Emerging Cybersecurity Risks

Incorporating AI in Risk Management: Challenges and Potential Benefits with Jeff Lowder, Co-Founder of The Society of Information Risk Analysts

👉 Importance of human input

👉 Leveraging AI for quantitative methods to create a new field within risk management

👉 The moral, ethical, and safety concerns associated with AI

Welcome to this episode of the Emerging Cyber Risk podcast, brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts. Today’s guest is Jeff Lowder, the Co-Founder and Past President of The Society of Information Risk Analysts, a society dedicated to continually improving the practice of information risk analysis. Our discussion today focuses on the emerging cyber risks and ethical concerns associated with AI in enterprise risk management, highlighting the challenges of managing risks, the need for interdisciplinary translation, and the importance of accurate language and calibrated estimations in risk management.

Topics we discuss:

  • The challenges and potential benefits of incorporating AI in risk management
  • The importance of human input in Bayesian Belief Networks
  • Leveraging AI for quantitative methods to create a new field within risk management
  • The moral, ethical, and safety concerns associated with AI

 

Jeff Lowder Bio:

Jeff is a former Chief Information Security Officer and Chief Privacy Officer with a passion for cyber risk quantification and management. As the Co-Founder and Past President of The Society of Information Risk Analysts, he is currently working towards offering a certification on Cyber Risk Quantification. He has built multiple successful security and privacy programs, established an Information Security Management System using the ISO 27001 framework, and has deep knowledge and understanding of other frameworks such as COBIT, NIST 800-53 | CSF | RMF, FedRAMP, DISA CC SRG IL4-5, PCI DSS, and SOC2. 

Jeff Lowder on LinkedIn

Society of Information Risk Analysts Website

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a Data Security and Compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He has trained and excelled while working for the United States Air Force. He maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

 

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a Security Strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over 25 years of diverse Information Technology experience with an emphasis on Cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on Cyber Risk and Artificial Intelligence to help you prepare for risk management of emerging technologies. We’re your host, Max Aulakh.

 

Joel – 00:00:18: And Joel Yonts. Join us as we dive into the development of AI, the evolution in cybersecurity, and other topics driving change in the cyber risk outlook.


Max – 00:00:27: Thank you everyone for joining us today. Today, we’ve got an exciting topic around Artificial Intelligence Risk Management, but let’s get to it.

 
Joel – 00:00:36: Thanks, Max. Today’s topic is Enterprise Risk Management, a topic that is near and dear to my heart as a CISO. It has guided programs throughout multiple decades now. It’s a mature practice in the way we talk to our executives. But of late, the thing that’s been on my mind, and a lot of people’s minds, is how AI and some of these emerging risks fit in. How do they incorporate into this framework? So we’re going to talk about that today. We’re going to hash it out, share some thoughts, and maybe come away with more questions than answers, but we’re certainly going to dive into it.

 

Max – 00:01:07: Awesome, awesome. So Joel, thank you for that. Before we dive into some of these exciting things, I’ve got my friend here, Jeff. I know you’ve been doing this. You wrote a book on some of this stuff. Tell us a little bit about yourself, Jeff, your story. Tell us about your book and then also your background and everything. And then, we’ll get right into some of the meat of the discussion.

 

Jeff – 00:01:28: Yeah, thanks for having me on, Max and Joel. My name is Jeff Lowder, and I am a former Chief Information Security Officer and Chief Privacy Officer with a real passion for cyber risk quantification and management. I’ve been doing this for about 27 years, and I am the Co-Founder and Past President of the Society of Information Risk Analysts; the website is societyinforisks.org. One of the things the society hopes to do eventually is offer a certification focused specifically on cyber risk quantification. To make that possible, I have been working for roughly a decade on a study guide for that to-be-created certification, the book you mentioned. It’s currently 145,000 words long. It’s called the Guide to the Information Risk Management Body of Knowledge, and it reads more like a university textbook than a typical cert prep guide. So it’s pretty meaty. I’ve actually had it reviewed by a Ph.D. in risk management out of Stanford, who gave me some very kind words. But I don’t want to make this just about the book. All of this is just to underscore that I’ve been doing and thinking about risk for a long time. I think that cyber risk quantification gives CISOs and Chief Privacy Officers the opportunity to speak the language of the business. No other part of the business makes a funding request in terms of red, yellow, green. They might say "this is a high sales opportunity," but they also have to provide real numbers and give a revenue forecast. So if we can train InfoSec and privacy professionals to use some of these same methods to quantify what they’re talking about, with the caveat that quantification does not equal precision, then we’ll be speaking the language of business, and we’ll have a much better chance of success.

 

Max – 00:03:32: Awesome. Yeah, I think one of the things that you mentioned, Jeff, is the language itself. You’re absolutely right. When information security professionals speak about risk, we are color-coding things, whereas other parts of the business are not, right? So yeah, that’s a very important point. And then, of course, the methods as well.

 

Joel – 00:03:52: I like what you said about precision too, because I think that’s a big thing when I think about risk. So I’m looking forward to reading your book and getting into that. And on that precision piece, you don’t have to be exactly precise. I know what you mean about the exact calculation: you just need to be directionally right so that you can compare options apples to apples if you need to.

 

Jeff – 00:04:13: Yeah, you nailed it. A lot of people, especially those with engineering backgrounds in my experience, hear the word quantitative and go back to their college Intro to Stats class. They think, well, how am I supposed to do this? I don’t have a big body of actuarial data to work from. And they’re assuming something called a frequentist interpretation of probability. But there’s a rival school of thought that I would argue is really the dominant school among risk management practitioners outside of InfoSec and privacy, and definitely very popular among professional statisticians, called the Bayesian interpretation of probability. It’s really just about measuring your own uncertainty. When you’re trying to make a decision between two or three alternatives as a CEO or a CFO or a Board, you don’t need it quantified down to exact dollars and cents. Usually, rough orders of magnitude are all you need to differentiate the options. So your estimate of the annualized loss expectancy might look like: I’m 90% confident that if we don’t do anything, we’re going to lose between $1 million and $6 million in the next five years. Whereas if we go with option B, my recommendation, we can lower that to a range of $400,000 to $1.2 million. Those are still really big ranges, but they’re more than enough to get the point across.
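The range comparison Jeff describes here is often implemented as a small Monte Carlo simulation. The sketch below is an editorial illustration, not something from the episode: it fits a lognormal loss distribution to each 90% confidence range (a common convention in cyber risk quantification), using only the Python standard library and the dollar figures from Jeff’s example.

```python
import math
import random

def lognormal_from_ci(lo, hi, z=1.645):
    """Back out lognormal parameters whose 5th/95th percentiles are lo and hi."""
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * z)
    return mu, sigma

def simulate_losses(lo, hi, trials=100_000, seed=42):
    """Monte Carlo samples of loss, given a calibrated 90% range."""
    rng = random.Random(seed)
    mu, sigma = lognormal_from_ci(lo, hi)
    return [rng.lognormvariate(mu, sigma) for _ in range(trials)]

# Option A (do nothing): 90% confident losses fall between $1M and $6M.
# Option B (recommended): 90% confident losses fall between $400K and $1.2M.
do_nothing = simulate_losses(1_000_000, 6_000_000)
option_b = simulate_losses(400_000, 1_200_000)

mean_a = sum(do_nothing) / len(do_nothing)
mean_b = sum(option_b) / len(option_b)
print(f"Expected loss, do nothing: ${mean_a:,.0f}")
print(f"Expected loss, option B:   ${mean_b:,.0f}")
print(f"Expected risk reduction:   ${mean_a - mean_b:,.0f}")
```

Even with wide input ranges, the expected-loss gap between the two options is large enough to support a decision, which is Jeff’s point about rough orders of magnitude.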

 

Max – 00:05:39: Yeah, I think when we look at quantification, it’s a hard science, right? And then, when you throw in a term like Bayesian network or Bayesian belief network, some of these things are foreign to information security professionals, right? When we look at our traditional books, like the CISSP books and all that, they’re barely teaching any of this stuff. It’s hard to find it, which points to some of the gaps in the market, and that’s why you wrote this body-of-knowledge book. But I want to talk about how we accelerate that, because some of this requires a new language. And we mentioned that the vernacular we use is incorrect. There’s this new art form happening right now with generative AI. Maybe it’ll help us fix the language problem, I don’t know, but I think if we’re not using the right words, we’re not going to be able to do the right calculations or even get close to it. Because of what you just mentioned, Jeff, I don’t think a lot of people even know what a Bayesian network might be. I’m certainly not an expert in that area, but I’ve at least heard of it, to give it a start, right?

 

 

Jeff – 00:06:42: Yeah, absolutely. And I want to be clear that there’s a distinction between a Bayesian network, which I actually haven’t mentioned on this call but which you and I talked about before this call, and a Bayesian interpretation of probability. So, the basic idea is that when people take statistics classes, they’ll learn a couple of different interpretations of probability. One is: if I roll a fair die, what are the chances that I’ll get a four? One out of six, right? That’s the classical interpretation of probability. Then there’s the frequentist interpretation, which says that if you have a million people who own Honda CR-Vs, then based on data from the last 10 years, we know that X percent of them will be involved in some sort of collision. Actuaries would use what they call a relative frequency as one of the main inputs to determine the cost of the premium to get that Honda CR-V insured. And I’m mentioning the Honda CR-V just because a member of my family got one yesterday, not because I think there’s an inherent problem with the Honda CR-V. So anybody...

 

Max – 00:07:46: You just want to make that clear, right?

 

Jeff – 00:07:48: Yeah. Anybody from Honda, I’m not dissing your car. And then a Bayesian interpretation of probability can be related to the other two things that I mentioned, but it doesn’t need to be, because sometimes you have really exotic scenarios where there is no historical data. It’s not just that we don’t have it available to us; it just outright doesn’t exist. For example, people who know anything about the history of nuclear weapons and waste products from civilian nuclear reactors know that the problem of how to dispose of nuclear waste is a very contentious political issue. For a long time, the United States government has wanted to park a lot of that material in a remote area of Nevada called Yucca Mountain, and it’s been the subject of decades-long lawsuits that have gone all the way up to the US Supreme Court. One of the things that comes up in that debate is: what’s the risk of something bad happening to the citizens of the state of Nevada if we did use Yucca Mountain as the place to store nuclear waste for the next 10,000 to 20,000 years, depending on the half-life of the material? You can’t use the classical or frequentist interpretations of probability to calculate that, because we only have one Yucca Mountain. So a Bayesian interpretation of probability would be applied by a subject matter expert, someone who has a PhD in, say, geology or nuclear chemistry or whatever the relevant field is. They would apply their knowledge of the relevant scientific principles to that question, and they learn how to get calibrated. It’s sort of like when you drive a car down the highway and the speedometer says you’re going 70 miles an hour: you’re willing to drive that car and go fast because you trust the speedometer. If you found out that the speedometers of your particular make and model said 70 when you were actually going 150, you probably wouldn’t drive the car. You trust the car because you know you can count on the speedometer; it’s been calibrated.
What happens in risk management, whether it’s environmental risk management like Yucca Mountain or information security risk management, is that we’re using uncalibrated speedometers that don’t even work with numbers. We ask human beings who might be super knowledgeable about their domain, such as the attack surfaces of a system, to give their opinion about how vulnerable the system is to attack, but we have no idea how good they are at quantifying their own uncertainty. So they might say, if forced to give a number, "Well, I think it’s between 50% and 80%." But then you give them a calibration test to find out how well they do on questions where we actually do know the correct answer and they don’t, and you find out that their estimates are accurate between 10% and 30% of the time. That’s not the way any organization should be making decisions, just like you wouldn’t drive a car with a speedometer that was only accurate 10% to 20% of the time. The good news is that it’s relatively easy to train people to do this. I’ve personally done it for multiple organizations, and 90% of people are trainable. The other 10%, we love you, but you just don’t get to provide an opinion that matters on these important topics. For the other 90%, we can usually train you in about five hours to give calibrated estimates, such that if you gave us a hundred estimates that you were 90% confident in, roughly 90 of those 100 would contain the correct answer and roughly 10 would be wrong. And that actually raises an important point: for organizations that do ERM, it’s very rare to measure the performance of ERM itself, specifically how often their risk forecasts were correct. But that’s exactly what you should do. It’s not just that you should certify someone initially as a calibrated estimator; you should track their performance over time.
And if people start to trend downwards, either they need to get recalibrated, or they get de-qualified and don’t get to provide the estimates anymore. And again, it’s nothing personal. You’re making decisions that can affect millions or billions of dollars for an organization. You should be reliable, and if you’re not, you shouldn’t be involved in that decision-making process.
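The scoring behind the calibration test Jeff describes takes only a few lines of code. A sketch, with entirely made-up intervals and answers: each (low, high) pair is a range the estimator claimed to be 90% confident in, checked against questions whose true answers are known.

```python
def calibration_hit_rate(estimates, actuals):
    """Fraction of (low, high) interval estimates that contain the true value."""
    hits = sum(1 for (lo, hi), truth in zip(estimates, actuals) if lo <= truth <= hi)
    return hits / len(estimates)

# Hypothetical trivia-style calibration test: five 90%-confidence ranges
# and the known correct answers.
intervals = [(300, 500), (10, 40), (1900, 1950), (50, 120), (2, 8)]
truths = [451, 29, 1912, 200, 5]  # 200 misses its interval of (50, 120)

print(f"Hit rate: {calibration_hit_rate(intervals, truths):.0%}")  # 80%
```

A well-calibrated 90% estimator should score near 90% over many questions; rates in the 10% to 30% band Jeff mentions indicate severe overconfidence.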

 

Joel – 00:12:20: Certainly. Yeah. And I love the way you characterize that; it takes away a lot of the subjectivity, which is really important in these numbers. Max, going back to the discussion around Bayesian networks and so forth: I didn’t know about those topics until last year, when I spent some time in the domain of AI-driven automated decision-making. That’s all about this. There are all these Markov chains and graph theory where you string together all these decisions. One of the things I learned from that, coming back to it, Jeff, is that AI is really good at executing this complex math. The real challenge is characterizing the objects; that’s where the real trick comes in, not in applying the math behind it. And that’s another area where AI has been advancing: with machine learning and some other techniques, AI has gotten a lot better at characterizing these end objects that we’re going to be ranking risks on. So, do you see AI being used to come in and help organizations quantify their risk, either through executing some of that decision science or through characterizing the individual risk components?

 

Jeff – 00:13:27: I would say I see machine learning doing both of those things. So you’re spot on. And I don’t claim to be the world’s expert on AI or Machine Learning, but I do know that, as you said, one of the things that a lot of approaches to Machine Learning use is Bayesian belief networks. And you also mentioned Markov Chains, which is another formal concept. And you’re absolutely correct that those things automate math. They have to have inputs to work with, what Bayesians might call priors. And so you can have informative priors or uninformative priors. Informative priors are better because they’re connected to the real world. And so they put you in a better starting point. But the one thing I don’t think that’s going to be solved anytime soon, meaning taking humans out of the equation, is how to structure the Bayesian belief network. Someone’s going to need to decompose a risk scenario into its elements. And then the Machine Learning algorithms can go to work on applying the math and propagating the numbers through the different elements of that risk scenario.
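Jeff’s division of labor, where a human structures the network and supplies the priors while the machine does the propagation, can be illustrated with a toy three-node chain. All of the probabilities below are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical chain a human analyst might structure:
#   phishing click -> credential compromise -> material breach
P_CLICK = 0.30                          # prior: user clicks a phishing link
P_COMP = {True: 0.60, False: 0.01}      # P(compromise | clicked?)
P_BREACH = {True: 0.25, False: 0.001}   # P(breach | compromised?)

def propagate():
    """Enumerate every state of the chain and sum the probability of a breach."""
    total = 0.0
    for click in (True, False):
        p_click = P_CLICK if click else 1 - P_CLICK
        for comp in (True, False):
            p_comp = P_COMP[click] if comp else 1 - P_COMP[click]
            # Only breach=True states contribute to the marginal probability.
            total += p_click * p_comp * P_BREACH[comp]
    return total

print(f"P(breach) = {propagate():.4f}")
```

The structure and the conditional probability tables are the human contribution; once those exist, the propagation is mechanical arithmetic, which is exactly the part the tooling can automate.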

 

Joel – 00:14:39: That’s really fascinating. Let me nerd out here for a moment: I saw an article the other day where generative AI is now generating the use cases and the training data sets used to train other forms of AI. So I imagine generative AI might be able to generate some of these belief networks in the near future.

 

Max – 00:14:57: That’s the key, right? You know, the question that comes to my mind is: is there such a thing as a synthetic data set, so we can teach some of these networks to get the right kind of simulation going? But I agree with Jeff’s point about how you construct that simulation in the simplest way. For any what-if scenario, there are thousands of them, but man, it would be something. I don’t know, Jeff, if you’ve read anything where people are trying to create synthetic simulations, synthetic data sets, to figure out the range of possible scenarios for building a network itself. That, from my rudimentary reading, is where we separate machine learning from a true artificial intelligence network, right? I don’t know if we’ve seen that in our field, but I think that’s where we’re headed next, because of all the information that’s available. I don’t know, Jeff, in your work, have you seen anybody talking about these things?

 

Jeff – 00:15:54: I haven’t. And with the caveat that I haven’t had an active security clearance since 1999, it wouldn’t surprise me if some government agency somewhere were doing this on the classified side. But in the unclassified world, I’m not aware of anything like that happening today. I would say that if I were running a program and someone came to me and said, "I’ve got an artificial intelligence program that’s capable of populating a Bayesian belief network for a risk scenario from scratch," I would raise my eyebrow in a sort of Mr. Spock fashion. I’d be very interested, but I would also take it with a grain of salt. I would go into it assuming that if we ran the simulation a bunch of times, it would be kind of like playing around with ChatGPT: some of the time you get pretty interesting, plausible-sounding outputs, and other times you’re like, no, that’s just not right. That’s what I would expect today. It wouldn’t shock me, though, if in the future, I don’t know how many years or decades out, we had more confidence from the start, depending on how quickly the field as a whole matures.

 

Max – 00:17:06: Yeah, I mean, it reminds me of trying to create N number of scenarios within a video game, essentially. There are a lot of fascinating video games out there with multitudes of scenarios, as many as there are players, and then you permute those, right? That’s literally billions of possible scenarios. So I can’t imagine that somebody isn’t trying this within the information security domain; I would hope they are, right? But yeah, I think to your point, Jeff, we’re taught to trust cars based on a dashboard. At some point, I think we’re going to be led to trust an artificially intelligent network just because it is what it is, and it’s a black box, right? That’s how I kind of feel about artificial intelligence networks at this point. It’s a black box to all of us, to some degree.

 

Joel – 00:17:55: Absolutely. I think the other angle of this is that we’ve been talking about using AI to help us solve the calculation side, but for some of these things where we’ve already got the quantification down to a science, running as a traditional process or technology, now we move it to AI. What does that do to the risk profile?

 

Max – 00:18:15: So some of these inputs, Jeff, right? I’m thinking: if we had a sound process, even the terrible math that we do on a spreadsheet with orange plus red equals green, or the other way around, right? If there were a way to implement AI into existing equations, what would that actually look like? How would that actually work? And what are some of the things it could produce for us as a field within information security?

 

Jeff – 00:18:42: Well, the big thing is that when you’re working with quantitative methods, you’re able to automate everything. To use a really simple example, imagine a spreadsheet where you have a number in column A, a different number in column B, and then column C is a calculated field equal to A plus B, and you did that for a million rows. That’s really simple, but the spreadsheet can spit out the values in column C in a second, whereas it would take a human a really, really long time. And that’s a simple example. Obviously, when you’re dealing with something like annualized loss expectancy, that would be one formula; risk reduction per unit cost would be a different formula. You’re going to have a lot more columns, and it’s going to be a more complex arithmetical operation, but at the end of the day it’s still just a formula, and the spreadsheet handles that just fine. So adding AI into this can mean a few things, but what comes up for me is simply automating that computation piece and then having it automagically visualized in a dashboard that is easily consumable by a human.
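The two formulas Jeff names, annualized loss expectancy and risk reduction per unit cost, really are spreadsheet-simple. A sketch over a hypothetical risk register (all figures invented for illustration):

```python
def ale(annual_freq, loss_per_event):
    """Annualized loss expectancy: expected events per year times loss per event."""
    return annual_freq * loss_per_event

def risk_reduction_per_dollar(freq, residual_freq, loss, control_cost):
    """ALE reduction bought per dollar of control spend."""
    return (ale(freq, loss) - ale(residual_freq, loss)) / control_cost

# (name, annual_freq, loss_per_event, control_cost, residual_freq)
register = [
    ("Ransomware",      0.10, 2_000_000, 150_000, 0.02),
    ("Insider exfil",   0.05, 1_500_000,  80_000, 0.03),
    ("Cloud misconfig", 0.50,   200_000,  40_000, 0.10),
]

for name, freq, loss, cost, residual in register:
    rroi = risk_reduction_per_dollar(freq, residual, loss, cost)
    print(f"{name:15s} ALE ${ale(freq, loss):>9,.0f} -> ${ale(residual, loss):>9,.0f}"
          f"  (${rroi:.2f} of risk reduced per $1 of control spend)")
```

As Jeff says, the formulas are trivial once the inputs exist; the value of automation is running them over every row and surfacing the result in a consumable view.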

 

Max – 00:19:53: Yeah. So to me, it means something a little bit more advanced. I mean, I hope, Joel, I don’t know how you feel about it, but I hope it’s more than automation, if there is such a thing. I was talking to one of my friends in the medical field, and he was telling me there’s this thing that just assembles all the calculations and runs all the math theorems based on inputs. But I mean, we’re talking about things that are a little bit further out over the horizon. I would hope that somebody is experimenting with this within our field; I believe they are, it’s just that a lot of it is unfortunately proprietary right now. But I think over the next couple of years we’ll start to see, just like OpenAI has opened up a few things, either the government or some of these larger institutions, like Microsoft, open it up, because a big part of managing risk is interoperability. Everybody has to be talking the same language, or else we’re going to be recalculating. It’s kind of like getting a score from another scanner: if it’s not interoperable and open, we’re going to be rerunning those numbers all over again. I don’t know how you guys feel about that, but I think that’s going to be critical if we do any kind of black-magic math and just put the AI label on it, right?

 

Joel – 00:21:07: Absolutely. I love interoperability; that’s a key ingredient. And I think the thing that’s going to be true is that our environments are becoming more dynamic because AI adds a new level. When I talk about managing via AI, I’m talking about the individual widgets we’re managing using AI; that’s going to rapidly change the risk posture of each of those individual items. So the risk management structure is going to have to keep up with the pace of change, and I think that’s going to be an important dynamic to add to this.

 

Max – 00:21:37: Yeah, I would think, Jeff, that right now the vernacular in your book is radically different from traditional risk management. So when we layer in anything like artificial intelligence or machine learning, where things are a little bit more dynamic, you get a new range of algorithms. I think it can co-create a whole new field within risk management, right? Like, how do you apply this new science not only to information security but also to risk management? I don’t know if you see that evolution, Jeff, but man, that’s what I’m kind of seeing out there, because we’re still trying to grasp the basics. And I think we’re just going to collide with something that’s brand new in the market, with all the hype that’s around AI right now.

 

Jeff – 00:22:22: I might have a slightly different viewpoint than you. I might not, I’m not sure, but one of the things I try to do in my book is act as an interdisciplinary translator. InfoSec has been trying to do risk management since I don’t know, let’s say, the 70s, but risk management as a holistic discipline has been around for roughly a couple hundred years, starting in the finance world. A decent portion of my book is translating how to apply these methods from other sub-disciplines of risk management to the InfoSec and the privacy world, methods that seem to have nothing to do with us. And it’s not that I think that I’m super smart; it’s just that most people don’t have the right background to be aware of these other methods and then show how they might be relevant. And so when I think about applying AI to what I just said, I would say either we’re going to, as a discipline, continue to use the same techniques for identifying risk, for measuring risk, or we’re not. And if we are, then I think AI’s role would be largely automation on steroids. If we’re not, then AI could help us to discover new techniques that are not currently being utilized. That’s kind of how I think about it.

 

Max – 00:23:45: Jeff, I think that’s where our views differ, and it’s cool, right? Because from what I’ve been reading, and I’m not an AI practitioner either, it’s the ability for it to generate the best decision based on the multitudes of available algorithms out there. That’s how I think about it. Now, I don’t want to get into the definitions; obviously, I’m not an active practitioner here, but I feel like there’s something more that the community is working on, and it’s going to impact core risk management across the board. And I understand we’ve got risk management, then we’ve got cyber, right? And you’re bringing in additional expertise from other fields, which is excellent, but I just get the feeling that what we’re dealing with here is entirely different.

 

Joel – 00:24:30: It is. Going back to that thread and looking at it from the other angle, what’s changing in risk management is the nodes. We’re talking about management using AI, but I saw a study that was measuring the decision noise coming from a particular process. When it was a human process, the noise associated with it, the error rate, was actually higher than with AI, because the AI was more consistent once we moved that function over. But the problem is that it’s non-deterministic: every once in a while you’ll get a decision that’s way off the charts. So maybe your average noise is lower, but the tail risk is very high, and that could be very dangerous. Being able to keep up as these components move out to this new risk profile, which is different because of that non-determinism, that’s the thing I think is changing a lot.

 

Max – 00:25:22: I think that’s the operative word, right? Non-deterministic. Because we’ve seen these crazy stories where an AI bot went racist, right? It went off the deep end. And we’ve got to wonder: okay, there are safety concerns, there are ethics concerns. Those are all elements of some sort of risk exposure that is not security-related risk exposure. How the heck are they measuring that if it’s completely non-deterministic? I think that’s very interesting, because that is the element we haven’t brought into information security yet, at least not at scale or very openly; I just haven’t seen it. And that’s where I get pretty excited, because we’ve seen risky behaviors with chatbots. How are they countering that when it’s a matter of human safety? That’s what I’m interested in. And I think, Jeff, I don’t know if you saw this, maybe Joel, you saw it, but NIST released a paper they called an Artificial Intelligence Risk Management Framework. Did you see that, Jeff? Did you read it?

 

Jeff – 00:26:33: I haven’t read it yet. In fact, it may have been you who sent me the link to it, but I’m aware of it. It’s on the reading list.

 

Max – 00:26:38: Yeah, what they’re talking about there is really more focused on ethics: controlling this thing somehow in a fair manner so it doesn’t damage the public, right? Because if it’s running out there, that’s one case. But yeah, I just find it very fascinating, because I think we’re going to see techniques to control AI that are going to be either part of risk management overall or a new field altogether. Call it human safety, call it what you want, right? So I see that as an evolution of risk management.

 

Joel – 00:27:12: I agree with you completely. And I’m trying to figure out how we bridge the gap, and that’s where, Jeff, you’re the expert in this space. Before, we had AI-driven risk management, but now we’re incurring AI risk. So one of the things I think about is the black swan; I know that’s been the risk management term for the really bad thing that could happen. Do you think, given the non-deterministic side of things, that we’ll see a rise in black swan events, and that they need to be factored into current risk calculations more?

 

Jeff – 00:27:42: I want to say no, because by definition they wouldn’t be black swan events if they were to suddenly become more common. Maybe they’d be gray swans, I don’t know. But I did want to say that I agree with both of you that there is this ethical debate, whether you want to call it risk management or something else, to be had about risk to society. It’s different; I don’t even feel like I have the right set of words, but it’s a different dimension that transcends the way we typically think about risk management. There’s a big difference between doing calculations versus making decisions, and this is maybe not the best metaphor or analogy, but think of network intrusion prevention systems in the late 90s, for those of us who were working in the field at that time. They didn’t tend to stay turned on for very long as intrusion prevention systems, because what they ended up doing was turning into firewalls that launched denial-of-service attacks on their own networks by blocking everything. They weren’t making good risk management decisions. Now, I wouldn’t call that an AI, so like I said, it’s not a perfect metaphor, but I use it as an example. We’re going to need, as a society as a whole and then as individual organizations, to figure out how we’re ever going to get comfortable with AIs making decisions for us. What kind of guardrails do we put in place? Do we even have guardrails? And if we do, what are they? Do we say, and I’m just going to make up examples, that we’ll allow AI to decide whether or not to keep the internet turned on for an organization, but we won’t allow AI to decide whether or not to amputate a human limb if you’ve got an AI-driven robot assisting in surgery? Totally different scenarios with totally different human outcomes.
I don’t even begin to claim to have the answers to those, but what I do know from my reading is that there’s this little-known discipline called risk communication. One of the big takeaways from that discipline is that the way risk management professionals think about risk has almost nothing whatsoever to do with the way the general public thinks about risk. As risk management professionals, we think about the probability or frequency of a bad thing happening, and if it did happen, how bad would it be? What we call probability and impact, or frequency and impact. The average human being, regardless of culture, religion, or language, doesn’t think about risk that way. What they think about are things like: is it fair? Who decides? Who brings the risk to me? It’s one thing if I get lung cancer because I choose to smoke a pack of cigarettes every day. It’s another thing if I get it because someone constructed my home with asbestos and didn’t tell me, and I get whatever you get from asbestos. So that’s just one example. There’s also dread: is the bad thing something that is dreaded? Toxic waste is something that people dread, right? Whereas if it’s a mundane, familiar risk, whatever that is, they don’t want it to happen, but there isn’t that sense of dread. Is it considered morally relevant versus morally not relevant? Who benefits from the risk, which might be different from who decides? So if there’s a risk scenario where, say, I’ll go to toxic waste again, some company benefits by pouring toxic waste into the river because they get to cut costs and make a lot of money, but all of the downside of that risk falls on the people drinking the polluted water, who might get cancer or die or whatever. That’s a different dimension. There’s a list of these heuristics or factors that have been studied by clinical psychologists. They’re well understood, and they predict with a high degree of accuracy how the general public will respond to different risks. And so it’s easy to imagine public relations experts coming up with competing ad campaigns to sway the public to either accept or reject a greater role for artificial intelligence. And the way they would do that would be to pull on all those threads I just mentioned.
So they might come up with some scenario where, well, Hospital X had an AI-managed robot that cut someone’s leg off, and now they’re handicapped for the rest of their life. People can imagine that in their heads. It’s not so scary that it’s unimaginable. It’s memorable. It’s not something they have control over, because the patient was sedated. And so it hits all of these buttons, and that’s going to increase the general public’s anti-artificial-intelligence outrage. Then, on the other hand, you’d have people developing an ad campaign arguing for artificial intelligence, and they would try to hit all of those points in reverse. They would say the people who are bringing you the risk are people you can trust, not because we say “trust us,” but because we have measurable data. We’re not asking you to trust us; we’re asking you to hold us accountable. This is not something that is dreaded. It’s actually very familiar. You’ve already been exposed to risks that are not identical, but they’re qualitatively similar in ways A, B, and C. And so you would go through this, and that’s exactly how the ad campaigns would play out. Whether they consciously use the discipline of risk communication or not, they’re going to hit on all those same buttons.
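The “frequency and impact” framing Jeff describes is the backbone of most cyber risk quantification approaches, such as FAIR-style models. As a minimal illustrative sketch, not anything discussed on the show, here is a Monte Carlo estimate of annualized loss expectancy, where event frequency is modeled as a Poisson process and loss per event as a lognormal distribution. All parameter values here are invented for illustration.

```python
import random

def poisson_event_count(rng, rate):
    """Count arrivals of a rate-`rate` Poisson process over one year,
    by summing exponential inter-arrival times until the year ends."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > 1.0:
            return n
        n += 1

def simulate_ale(freq_per_year, impact_mu, impact_sigma, trials=10_000, seed=1):
    """Monte Carlo estimate of annualized loss expectancy (ALE):
    draw how many loss events occur each simulated year, then draw a
    lognormal loss amount for each event, and average across years."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        events = poisson_event_count(rng, freq_per_year)
        total += sum(rng.lognormvariate(impact_mu, impact_sigma)
                     for _ in range(events))
    return total / trials

# Hypothetical scenario: ~2 incidents/year, median impact ~$10k
# (mu = ln(10_000) ~ 9.21), with a heavy right tail (sigma = 1.0).
ale = simulate_ale(2.0, 9.21, 1.0)
print(f"Estimated annualized loss: ${ale:,.0f}")
```

Simulating a full loss distribution, rather than multiplying single-point frequency and impact estimates, is what lets an analyst also talk about tail outcomes, which connects back to the black swan question above.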

 
Max – 00:33:16: Man, there’s a lot to unpack there, Joel. I’m gonna let you go first, man.


Joel – 00:33:20: I mean, I’ll say that you’re hitting on something I find very important: public perception. And one, I’m fascinated by it because of the human experience, and it goes beyond cyber. But what I’m telling people from a cyber perspective is that AI impacts human lives. We hear about the displacement of jobs. It also has an environmental impact; right now, it takes a lot of energy, and generates a lot of pollution, to train every large model. And as that becomes a negative draw, what you’re gonna find is hacktivists. You’re gonna have people attacking companies or AI systems, potentially because of these environmental and public perception issues. So there are cyber risks associated with it, not just ethical considerations.

 

Max – 00:34:00: Yeah, I think there are going to be a lot of moral, fairness, and ethical questions, all those kinds of questions. But the scenario you painted, Jeff, we’re kind of already starting to see it. There’s a set of professionals in the artificial intelligence community that are saying, stop, don’t go further. Usually it’s the cybersecurity professionals who are saying this, right? But man, I’m seeing the opposite here, where the people who are most knowledgeable about this, the hands-on people, are actually saying, stop, don’t proceed, it’s high risk. I’ve never seen so many professionals ask for more control, ask for more regulation, because there’s a ton of risk all across safety, moral, and ethical dimensions. And then, of course, within cyber, man, I can think of so many different ways to manipulate this thing, or to leverage it, like in the military, Jeff, what we call a force extension, right? And I think that’s why we’re so excited to have it do the work for us, right? Adversarial networks, things like that. But yeah, I think there’s going to be a big play in the human element: somebody’s going to have to convince the mass public that our AI is the safest one for whatever task it needs to do. That’s what your mention of a robot cutting off somebody’s arm reminded me of, right? Because that’s a big risk to the FDA.

 

Jeff – 00:35:32: They go in to get their tonsils removed, and they wake up and they don’t have a left foot, or their entire leg is gone. Fortunately it’s very rare, but we already have problems with human surgeons cutting off the wrong thing. So just imagine if it’s a robot that does it. Two stories have been in the news in the last week or so that seem relevant to this part of the conversation. The first: I don’t remember the guy’s name, but he’s called the godfather of AI at Google. He’s been in the news because he just left Google, precisely because he now opposes the AI work that he was doing. So he’s probably exhibit A of the AI professionals saying, hold on a second, we need to have a bigger conversation. And then the other example, I just read this in the news a couple of days ago: there was a woman who got a deepfaked audio phone call that was supposed to be from her daughter, saying that she had been kidnapped and was being held hostage for ransom, when in fact her daughter was not kidnapped and was perfectly fine. When the mom got the deepfake phone call, she mentioned it, I hope I’m telling the story right, to her husband. And so then the dad called the daughter, and she’s like, what are you talking about? I’m upstairs in my bedroom. But in the meantime, the mom had spun up the media and law enforcement, and it was a very convincing phone call. So, I mean, just think about it. A mom knows her child, right? A grown woman, and the daughter, I think, was in her early twenties. She knows her daughter, and she was convinced that her daughter had been kidnapped. Was it an AI that placed the call? Probably not. But that’s just an example of deepfake technology, and someone could say, now imagine a malicious AI doing stuff like that.
And the broader concern that I’ve seen cited is that AI is going to force a fundamental question: how do humans know what is real anymore, with all these really convincing deepfakes? So those are just two examples. And I don’t want to be all doom and gloom, but it’s definitely a whole new territory that we’ve really not had to think about a whole lot until very, very recently.

 

Joel – 00:37:42: Absolutely. Well, Max, when you talk about that person saying we should hold off, here’s what I think about: if the people, the practitioners, are cautious and hold off, and the attackers, and maybe world governments that have different opinions about ethics, don’t hold off, what then? I mean, what are your thoughts on that? How do you balance that?

 

Max – 00:38:03: Man, I think the people in the know are already working up national policies. I don’t know if I mentioned this before on this show or with somebody else, but China pushed out its AI policy from a national security standpoint. I think the statement they made is: do not subvert the government. Because they know its capability to entirely disrupt a government, to entirely disrupt a society, in many different ways. So yeah, I think if we as security professionals don’t get a hold of this and start to leverage it in our profession and also in our businesses, somebody else on the other side will use it with bad intent. You know what I mean? So I kind of see it like a weapon, right? We can learn how to use weapons, and they can also hurt us. That’s how I view it right now, when I see world governments making very bold statements against the technology. Who says “do not use this to subvert”? Usually that’s a statement we would see about a nuclear weapon or some other catastrophic weapon that could cause that level of harm.

 

Joel – 00:39:10: Absolutely.

 

Max – 00:39:12: I’m going to ask this question to you, Jeff, and Joel, maybe you can chime in as well. For our audience, we talked about a lot of different concepts here. We talked about your book, Jeff, which is fantastic, and I encourage people to read it. But do you think you’ll be updating it as some of these new methods and techniques come out? And how do you think this will evolve within the Society of Information Risk Analysts? Because this is a brand-new area. I know there’s a lot of unknown, but based on what I’m seeing, I think the field itself will expand. How do you feel about that? Do you think it will expand, or do you think we’re in pretty good shape right now on the quantification side of things?

 

Jeff – 00:39:54: I think if I think about this too long, my head’s going to explode. I’m just trying to get the first edition of the book finished and published, and it definitely will not be discussing anything that we’ve talked about on this call. So it might be out of date the day that it hits the press, and I’m completely okay with that. We’ve got a lot of catching up to do as a field. And so I’d be perfectly okay with people getting caught up and embracing quantitative methods and being kind of head in the sand about AI, while we as a community collectively figure out what the standard of practice should be and then make any updates to a second edition.

 

Max – 00:40:35: Yeah, I think we do often get excited about different things, but the fundamentals, you can’t miss those, right? Some of these basic things. So with that, Jeff, I wanted to thank you, man, for coming on the show, talking to us, and getting this recording going. It’s been a fun conversation, and we really appreciate you riffing with us a little bit here.

 

Jeff – 00:40:54: Oh, it’s been a pleasure, Max. Joel, it’s been a pleasure meeting you, and I appreciate the opportunity.

 

Joel – 00:41:00: Yeah. And I look forward to reading your book. I will. Now, it may take me a bit. What’d you say, 145,000 words or whatever?

 

Jeff – 00:41:06: Yes.

 

Joel – 00:41:07: It may take me a day or two to get through that.

 

Jeff – 00:41:10: But it does double as a fantastic aid for insomnia. So, just keep that in mind.

 

Max – 00:41:17: Man, we really appreciate you hopping on.

 

Jeff – 00:41:19: Take care, guys.

 

Max – 00:41:20: Awesome. Take care.

 

Jeff – 00:41:21: Bye.

 

Max – 00:41:24: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel – 00:41:35: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.

 

 
