
Emerging Cybersecurity Risks

Exploring the Intersection of Cyber Security and AI: Insights from Phil Harris of IDC

👉 The Impact of Ransomware on Cyber Insurance

👉 Leveraging AI in Cyber Insurance

👉 Enhancing Security Assessments in Cyber Insurance with AI


On this episode of the Emerging Cyber Risk podcast, our guest is Phil Harris, Research Director, Cyber Security Risk Management Services at IDC, the premier global market intelligence firm. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.

 

Join us as we discuss the fascinating intersection of cyber insurance and artificial intelligence (AI). Discover how the rise of ransomware attacks has influenced the cyber insurance landscape, resulting in higher premiums and demand for accurate security assessments. Explore the role of AI in the industry and the importance of relying on cyber security experts for assessments.

 

The touchpoints of our discussion include:

  • It’s not about asking the right cyber security questions; it’s about who is answering them
  • Is cyber security data trustworthy?
  • Dealing with the challenge of people gaming questionnaires
  • How organizations can leverage internal AI to qualify responses to questionnaires
  • The need for a standardized risk evaluation framework across states
  • Adoption barriers to using AI in risk assessment

 

Phil Harris Bio:

Phil Harris has been in cyber security for over 30 years and has held leadership roles at companies like Safeway and Symantec. He is an acknowledged thought leader in the cyber security space and a sought-after keynote speaker on the topic. At IDC, Phil is responsible for developing and socializing IDC’s point of view on governance, risk, compliance advisory, and privacy services for Enterprises, IT Suppliers, and Service Providers. He develops research on business strategies and the impact of relevant service offerings on enterprises, and works with other worldwide and regional analysts to develop a holistic set of thought leadership and actionable research for IT Buyers and Suppliers.

 

Phil Harris on LinkedIn

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and excelled in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

 

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cyber security. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on Cyber Risk and Artificial Intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh.

 

Joel – 00:00:18: And Joel Yonts. Join us as we dive into the development of AI, the evolution of Cybersecurity, and other topics driving change in the Cyber Risk outlook.

 

Welcome to the Emerging Cyber Risk Podcast. I’m your host, Joel Yonts. Today, we have Phil Harris with us, and we’re going to have an interesting discussion about Cyber Insurance and AI and how the two intersect, and what that’s going to mean longer term for corporations and organizations. Also, as always, Max Aulakh, my co-host, is here. Max, do you want to take us on a deeper dive on what we’re going to talk about and kick things off?

 

Max – 00:00:53: Yeah, I think the big question, one of the things that we’re always wondering about, is where is Cyber Insurance going? How does that intersect, of course, with artificial intelligence? We’ll explore some different types of coverages and things like that, but before we get too deep into it, we also have Phil. Phil is a good friend of mine. Phil, for those who are listening and who are not familiar with your background, tell us a little bit about your background, your experience, and also what you’re currently doing.

 

Phil – 00:01:22: Sure. So, my name is Phil Harris. I’m a Research Director for Governance, Risk, and Compliance Services at IDC. I’ve been in the Cybersecurity field for over 30 years myself. I’ve held just about every role you can think of, from CISO to Engineer to Architect, Strategist, and Consultant. A good portion of my career I focused on cryptography, and a large part of my career focused on building and running risk management practices for organizations. And at IDC, obviously, it’s governance, risk, and compliance services. It’s the people-and-process side of that space. I focus a lot on service providers out there that are helping organizations build and run or even execute risk management, compliance management, or even governance programs for organizations. As part of that, in the last couple of years, I’ve been focused on Cyber Insurance as a topical area for discussion, and I’ve been creating some studies that I found pretty enlightening over the last couple of years.

 

Max – 00:02:22: Awesome. Well, Phil, definitely welcome to the show. And we love your area, obviously being a GRC player. So, let’s dive into some of these things that you have learned. And I’m sure our audience would love to learn these things as well. But when it comes to Cyber Insurance, what are you seeing out there just from a high-level perspective? Where is the current state of Cyber Insurance? Because it seems like every other year, there’s some sort of a lawsuit, or there’s some sort of legal action on the claim side or the insurance side. What are you seeing out there when it comes to Cyber Insurance?

 

Phil – 00:02:54: Well, I think for Cyber Insurance, if we look at pre-COVID, Cyber Insurance was like, yeah, let’s buy this. Let’s have it for our organization. Let’s fill out the questionnaire, and we’ll spend a few dollars every year, and we’ll have some coverage, whatever that meant. But it’s in the sort of during and post-COVID time where we saw a dramatic rise in ransomware attacks. So ransomware really became a driver for Cyber Insurers to all of a sudden wake up and realize that all those policies they had out there and all those questionnaires they had companies answer, for the most part, the questionnaires were responded to, but it’s questionable whether the answers were spot on or accurate or dreamed up by somebody. And so the Cyber Insurers are beginning to realize that those policies they have out there, they’re not sure if they can cover them because they’re not sure the organizations they wrote the policies for can be covered. So they’re dealing with that issue right now. And that, combined with ransomware, just created a dramatic rise in claims over the last few years to the point where Cyber Insurers started upping their premiums. At first, it was 10% a year; you could count on it. Now it’s ramped up to 30%, 40%, 50%, 60% in some cases. And so the Cyber Insurers, I think, for the most part, are scrambling right now. They’re scrambling to figure out what they have done. What have they created? And how are they going to be able to support this?

 

Joel – 00:04:25: One of the things I’ve observed being on the other end of those questionnaires is that in the beginning, like a few years ago, the Cyber Insurance underwriter discussion was around just providing enough confidence that you had it under control. And that was about enough. But I’ve seen it mature to where there are real questions, real quantification behind it now. It’s been really interesting to see it from that perspective.

 

Phil – 00:04:47: Yeah, in fact, I had a conversation with a woman who was in the Cyber Insurance field for a number of years. She was on the claims side, I believe. She remarked that as a result of ransomware and the war between Ukraine and Russia, Cyber Insurance claims seemed to level out in some way, at least for the particular business she worked for, mainly because the questionnaires became more detailed. And so we’re requiring companies to fill out more details and come to the table with more information about their security posture. And so companies were starting to wake up to, okay, I guess we have to figure out how we fill this out and fill it out as accurately as possible. So yeah, you’re absolutely right, Joel. I mean, it’s just been this weird, well, you know what? We’ve been in the Cybersecurity field for so long. It’s not weird. It is not weird by any stretch of the imagination, right? Remember the days when the only way we would get any amount of budget for Cybersecurity protection was praying for the company to be hacked in some way? So there would be some devastating event, and we could finally get some budget and do stuff. Right. So this is not new.

 

Max – 00:05:52: This is not new. Yeah. No, I second that. I do recall, like, okay, you know, there’s going to be some catastrophic event. And then in the military, we always say, hurry up and wait. Right. Like, why isn’t this done? And I think the Cyber Insurance Industry is in that sort of quagmire, like, oh, we need to hurry up and figure out what to do next. Because I would imagine at this point in time, when the questions get more detailed, which is the right thing to do, you’ve got to ask the right question. Right. You’ve got to ask something as basic as: do you have single sign-on? But the person on the other side that’s responding, if they’re not a practitioner, and we have all of these auto-response question-answer capabilities with artificial intelligence, man, I can only imagine the number of applications they’ve got to get through. Hey, are we confident in the answers that are being provided to us, the insurance companies? Because that all ties back to potential payouts. Right. Or did we even size up the risk the right way? Right.

 

Phil – 00:06:52: Yeah, absolutely. Max, it’s interesting you brought up that the Cyber Insurers have these really detailed questionnaires and have to review them and determine if they’re actually factual. My question is, are they the right people to even be dishing out a questionnaire or reviewing a questionnaire that gets responded to? It’s not their core competency at all. I mean, they may have a Cybersecurity arm that’s part of the insurance company that may be reviewing these questionnaires. But at the end of the day, it’s just not their core competency. And I’ve been a proponent of the Cyber Insurance Industry just taking a page out of the Payment Card Industry, not to the degree that they create this behemoth of a program where it costs billions of dollars to get people certified. I’m just talking about having qualified Cybersecurity professionals assess the companies, producing a result that they know has been reviewed to the nth degree. And so they don’t have to worry about, gosh, you know, do we look at the answers or don’t we look at the answers? Do we trust the answers? They don’t have to think about it. They really don’t.

 

Max – 00:07:53: Yeah, I can imagine, you know, just trusting the validity of data is going to become even more crucial, right? Because right now, with auto-generation of responses, you don’t really know if it’s a human. You genuinely don’t know with artificial intelligence who’s answering those questions. And then, if the other side isn’t an expert to even know, like, okay, I asked for a single sign-on, but you’re talking about some other random acronyms that don’t make any sense. But I’ll just say, yes, you answered it, and I’ll just reduce your premium. Man, I think they’re going to be in a world of trouble unless they figure out if they can even trust the data that’s coming through.

 

Joel – 00:08:33: Yeah. And how do they get into it? I’ve been in a lot of these interviews. But do you see third-party sources or technical validations? Do you see a rise of some of that to back up these interviews and questionnaires?

 

Phil – 00:08:44: I’m starting to see a lot more. I did a survey last year, and I found that there’s a rising wave of companies, Cyber Insurers, and organizations having a security assessment done, a deep security assessment, to produce a result. And the results are being used to help drive what Cyber Insurance looks like, the policy coverage, and even the premiums. And as Max and I were talking about, it would be a great idea if we could start relying on reports like that, whether it had a score or some indication of red, yellow, green, some indicator that the Cyber Insurer could then use to assign an appropriate premium to that organization.
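The red/yellow/green tiering Phil describes could be sketched roughly like this; the base premium and multipliers below are invented for illustration and are not real underwriting figures:

```python
# Hypothetical sketch of mapping a red/yellow/green assessment result
# to a premium adjustment. All figures are illustrative assumptions.

BASE_ANNUAL_PREMIUM = 100_000  # assumed baseline premium, in dollars

# Assumed mapping from an assessment indicator to a premium multiplier.
TIER_MULTIPLIERS = {
    "green": 0.85,   # strong posture: discount
    "yellow": 1.15,  # known gaps: surcharge
    "red": 1.60,     # serious issues: steep surcharge, or decline to write
}

def premium_for_assessment(indicator: str, base: float = BASE_ANNUAL_PREMIUM) -> float:
    """Return an adjusted annual premium for a red/yellow/green result."""
    try:
        return base * TIER_MULTIPLIERS[indicator.lower()]
    except KeyError:
        raise ValueError(f"unknown assessment indicator: {indicator!r}")
```

Under these made-up numbers, a "yellow" report against a $100,000 baseline lands near $115,000, giving the CFO conversation Phil describes later a concrete figure to anchor on.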

 

Joel – 00:09:28: I love it.

 

Phil – 00:09:29: You know, I think that that’s the direction of this because, as I said, Max and I have talked about this, and the hurdle we have to go through is the states and how they drive how insurance is done within the states. But maybe we can somehow create a rising tide of Cyber Insurers who get out of the business of being security assessors and leverage qualified professionals to do that. They could be more creative, and they could also help drive organizations to have better security. Because, as we all know, money is a driver, right? The CFO says, why are we paying a million dollars in Cyber Insurance premiums? Well, CFO, on our assessment, we had yellow on our report, not green, not red, but yellow, which means we’ve got some problems, we’ve got some issues to deal with, and we’ve got to get them taken care of. So I need a budget.

 

Joel – 00:10:22: Certainly. I was laughing internally a little bit because I was thinking about the forward part of this conversation: just like you said earlier, nothing’s new. We keep repeating ourselves. What I perceive happening is we get better and better questions, we get better and better at quantifying Cyber Risk. And then AI throws all those questions out the door because it changes the game. I feel like that’s where we’re going to head with this. What do you think that’s going to look like?

 

Phil – 00:10:46: Well, you know, I think, like everything in life, human beings are really, really good at gaming systems, and organizations are no different. You know, if they can game Cyber Insurance responses so that they can get a decent premium, they’re going to try. At the end of the day, the Cyber Insurers are going to pay for it. They’re going to pay for this gaming that will start happening with AI. It’s just going to start happening. Even if the Cyber Insurers decide they want to use AI to review the responses, well, what good is that going to do? The AI system doesn’t have access to the company that just responded. So there’s no way to validate. At this point in time, you can’t trust AI to really help you in this way. You need humans. You need human intelligence right now. I’m not saying it’s permanent, but I’m saying you need human intelligence right now to set the ship in the right direction.

 

Max – 00:11:39: Yeah, I think it’s all about, like you said, following the money, right? It’s all about money. And if we are not able to trust the data, whether it’s manual or human-driven, the AI’s ability to look at that, the confidence is going to be much lower. It could be better. I think there’s some automation that could be applied. Joel, that’s what I’m thinking. I think the insurtech industry, right? That’s what they call it, right? Insurance technology. I think there are a lot of great proposals out there, but the trustworthiness of the data itself, without having access to direct company networks, man, it is a really hard thing for insurance companies to solve, I would imagine.

 

Joel – 00:12:19: I like what you’re saying there, Max. And it goes back to something that we talked about a few episodes ago about how to use AI to understand the true state of Cybersecurity in your organization. In that context, it was about enabling a CISO to be more empowered. Because I’ve got to tell you, it is a lonely place to be a CISO of a large organization. If you have to sign your name saying everything’s there, I mean, how do you really know, right? And so you’re a GRC guy, Max. As you put more and more AI into your solution and you have more reliable information about the organization, it should empower you to answer more definitively and maybe empower the insurance providers to be able to have more definitive answers around it as well.

 

Max – 00:13:00: You certainly could. And in my opinion, the data will get better on the company side, but it’s about the incentive structure to some degree as well.

 

Phil – 00:13:09: Yeah, I think we hit upon something. Instead of the Cyber Insurer having to deal with AI, maybe it’s the organization itself that should leverage AI to do a lot of that discovery, a lot of that identification of answers and responses, and validate a lot of this stuff. Because as we all know, Cybersecurity teams are small for the most part, depending on the size of the organization, of course. They have service providers, but they themselves are limited in what they do. And the team within Cybersecurity, they’re busy doing a million other things, and having to drop stuff to go and validate responses to an assessment is just going to continue to get worse and worse and worse. So I think artificial intelligence within the organizations, within the cybersecurity organization, could really be a boon and may even negate the need for qualified security service providers to have to do an assessment. It may, I won’t say it will, but it may.

 

Max – 00:14:08: Yeah, imagine pointing this thing internally, because now you have access to information, right? That’s the key. It’s not just sample data or false data that it’s learning from; it’s real data, real operational risk data internally. Yeah, the GRC teams are always crushed. So when we look at the whole cybersecurity team, the GRC team gets the last of the two pennies, right? The leftovers. And the work is related to asking good questions and getting accurate answers to those good questions. And I think that’s where we could see a lot of benefits. That’s really what the other side of the house is trying to do, too; the insurance companies are trying to ask good questions in order to get good answers. But yeah, I think both sides could benefit from it. Joel, you were going to say something.

 

Joel – 00:14:59: Yeah, the other part that I’m thinking about is, we’re talking about the organization using AI to improve their ability to apply and certify for insurance, and conversely the insurance providers being able to better assess an organization, but that’s all about vulnerability. What we haven’t talked about is what AI will do to the premiums from a risk standpoint. That is a gargantuan unknown. So, Phil, you talked about the ransomware that changed cyber insurance. What do you think the AI event that changes it is going to look like from the premium payout perspective?

 

Phil – 00:15:32: Let’s posit a theory here about AI in organizations, right? Let’s just say we can rely on AI to do maybe 80 to 85% of the validation in the environment. And you’ve still got 15 to 20% that need human intelligence to validate the rest. I think that that could end up being a huge call for adjustments to premiums. Organizations will have the leverage, I think, to go back and say, wait a minute, we need to do something different here. So it’s possible there may be a hue and cry from organizations around the world to say, wait a minute, we’re doing everything we possibly can to give you every possible piece of information about how secure we are. You need to come to the table with some rationalization here. I think we could start looking at the tiering of premiums, right?

 

Joel – 00:16:24: Makes sense.

 

Phil – 00:16:24: I think we could do that. But as we talked about, it’s going to require some change, be it horrific or torrential or easy, within the states’ insurance organizations to make it happen, right?

 

Max – 00:16:38: Yeah, and there’s no such thing as a refund in insurance, right? Like, hey, we told you we were doing all these things. You didn’t ask the right questions. We answered the questions incorrectly because you asked the wrong question. And now, here’s all the data. I’ve been doing it for years. Yeah, there’s no such thing as a refund, right? So, I do believe we need some sort of a standardized model across different states. That could be a catalyst, right? We need something to be a catalyst across the states, especially if we start to get large data sets over to the insurance providers, because, you know, they tout their ability to cover risk, at least in cyber and artificial intelligence, and that has to be data-driven. We can’t do it on just a few things that always change.

 

Phil – 00:17:24: And I think what complicates that is the inability of insurers to leverage the historical data for Cyber Insurance, right? Because if you think about it, before COVID, ransomware was like, eh, it happens. Certain companies are getting hit; certain companies are not. So it’s not really a big deal. All of a sudden, it grows dramatically. Nobody saw it coming. And now the Cyber Insurers are left holding the bag for the most part, having to deal with that. And so the issue of historical data, so pre-COVID, you couldn’t use any historical data, or maybe you could have, because it was static data that you could count on and look back at. Yeah, they got broken into, and yeah, because they did this, this, and this. But now we just don’t know what the next superattack is going to look like.

 

Joel – 00:18:15: Certainly. I mean, when I think about the ransomware example you’re talking about, I know there are models out there where we can project how much a ransomware attack would cost an organization in downtime and availability. But that’s historical. If we take it to, for example, a large distribution center: if the main picking system is down, what you have is an inefficient workforce that runs around and does the picking by hand, but it still runs. But as AI and Robotics continue to change this environment, and now your workforce becomes 60% robotic, as opposed to 15% technology enablement, suddenly ransomware shuts down the ability even to have a manual operation because there are no people involved at this point. So AI, as it infuses into every business process, changes the equation dramatically. I would imagine it’s hard to project what that’s going to look like.

 

Max – 00:19:04: I think, Joel, I can speak to that, because we’ve had to deal with a lot of claims. A lot of the businesses that we support end up buying Cyber Insurance. An event happens. Now, we’re like the recovery team. We’re trying to recover money. And a lot of these clauses are very interesting, the way they’re written. They’re paying on downtime, essentially, as you’re mentioning. I think they’re going to have to sharpen up their underwriting skills in terms of what kind of downtime. Because if you can throw a person, a human, at the problem, that’s not downtime. Clauses are written in a very clever way, right, to not provide the benefit to the person who’s buying the insurance, unfortunately. But I think when you throw artificial intelligence into the mix, your downtime is, essentially, the automation is down. Now, it’s that whole inefficiency cycle. Yeah, I can see a transition from basically risk management to, I don’t know, Black Swan Event Management Companies.

 

Joel – 00:20:02: Well, yeah. I mean, I like what you’re saying there because the other thing I was thinking as we were talking, I wanted to get both your opinions on this, is that there are new types of events. So, I just talked about amplification to an existing one. But let’s talk about the worst case: there’s a decision made by an AI system that costs us human life. Who is responsible? And so is the organization responsible? Is the AI responsible? We’ve had a lot of these ethical discussions, and we’re trying to figure out the legalities of it. But now let’s put insurance into it. How does that change what they pay and don’t pay in that situation? Do you have any thoughts on that?

 

Phil – 00:20:35: Wow. What’s running through my head at this point is, how could we possibly blame artificial intelligence for it? It’s not sentient. It’s just doing what it’s written to do. And in many cases, we may not know the extent of what it’s written to do. So how could we assign blame to an artificial intelligence or even a machine learning process? It’s just got to come back to the humans that created this thing.

 

Max – 00:21:01: Yeah, the manufacturer, Phil, right? Like, if we’re all using ChatGPT to do a bunch of our workloads for different things, accurate or inaccurate, or any other large language model, the insurance companies love to throw out war clauses. Hey, this was an act of war, so we’re not going to pay for it. Can’t do that with artificial intelligence that’s owned by a manufacturer or owned and operated by the ecosystem at large.

 

Joel – 00:21:27: I’m going to be the antagonist in this conversation. All right, so I’m going to be the person who created and sold you this. And my argument is going to be no one programmed that robot. All I did was create a model that has the ability to learn. And I didn’t give it logic at all. I gave it data. And it learned patterns from the data. And I got the data from you or from someone else. So it’s your data that it was trained on that caused this error, not my model, because my model’s a standard model. So, that would be my argument from an antagonist standpoint.

 

Phil – 00:21:58: That’s a pretty potent argument, actually.

 

Max – 00:22:00: It really is because we were just talking about, all right, if I’m an insurance provider, I don’t have the data, now you give me the data. And I put it through some sort of a model that has some efficacy and validation, and it still creates some sort of an error in a way that doesn’t benefit anybody, right? I think those are some of the challenges I could see in adoption, right?

 

Joel – 00:22:22: I don’t think anybody’s got the answer on this, by the way. But the thing about it is, the insurance people are going to be the ones that are left holding the check if they have to pay out like they did before. So what do you think? Is it going to be a conservative play where it’s just going to, oh, you’re using AI, you get a 50% premium hike, or what do you guys think that’s going to look like?

 

Phil – 00:22:39: I think a lot of it has to do with what’s the experience or what’s the actual results that are being produced here. We just don’t know. So Cyber Insurers can’t just out of the gate say, well, no, we won’t cover it because it’s artificial intelligence. Well, we don’t know how good that artificial intelligence model is. We don’t know what kind of results it’s been producing. We just don’t know. And so to just out of the gate say, hey, we’re just not going to deal with it, they could try to do that. But eventually, it may end up coming down to both sides playing a wait-and-see game to see what actually shakes out. I mean, a perfect example: what if an AI model, because of the data it received, shuts down a respirator in a hospital and causes someone to die? Well, what kind of data in that hospital would have caused that AI to shut down that respirator for that patient to die? So we’re going to see a bunch of artificial intelligence scientists, a whole new field because we need people to actually interpret all of that, go through all that data, go through all the results, and see, will it actually produce the result that we want? And what are the potential unforeseen things like human life? That would be huge in my mind.

 

Max – 00:23:52: It reminds me of how we sometimes work on these certifications. They’re called FDA 510(k) certifications. And essentially, these are medical device certifications for devices that are high-risk, right? Like if you’re using a respirator, it’s got to be certified by the FDA in this way, at least in the United States. And man, I don’t think I have seen really good guidance from even the FDA on how they plan on managing the risk, just the human physical safety risk of these devices and all risks, right? You end up with some sort of residual risk; you can’t get rid of all of it. And a lot of it is covered by insurance, right? And so I don’t know if we have the right answer, but my hypothesis is, Phil, as you stated, that we’ll see an emerging, brand-new field of practitioners that goes beyond just regular actuarial science, beyond data science; it’s like a fusion of risk, cyber, legal, and artificial intelligence. That’s where my head goes. I don’t actually know how this will get accomplished.

 

Joel – 00:24:53: One of the things that will help, I would imagine, is the world’s court systems getting involved, because they’re going to be asked to weigh in on these things. Which means it may be very fickle: one day the answer may be X, but as a court rules in a specific way, it may be completely opposite the day after, based on that ruling.

 

Phil – 00:25:12: Yeah, I mean, if we all recall way back when, gosh, in the 80s at least, lawyers were having to deal with data theft on hard drives, and they had no language for any of this stuff. Judges had no language for any of this stuff. So we’re going to see a whole new set of language for this that lawyers and judges and all the legal officials are going to have to start getting used to. So that’s one thing I see. The other thing that I think might be the most practical thing right now, anyway, so that humanity can cut its teeth on AI: we should just have limited use of AI capabilities, right? So, for example, I’m the CISO, and I’ve got an AI that goes out and does all my compliance, that just validates everything. For me, that’s a limited-use thing. It’s not taking action on non-compliance yet, but it’s telling me what’s not compliant. And the same thing with medical device certifications. Why can’t we have a limited AI capability that does that review? It gets the complete answer, or a possible answer that humans might, in fact, miss. And then, at some point, we may begin to shift to larger-scale AI capabilities beyond the limited use. The first time I saw ChatGPT, I was like, okay, that’s Skynet under the covers, right? It occurred to me that now I’ve got this capability, and it’s out there, and nobody knows. I don’t think even the people who created it really know what it’s capable of, and we’re using it. We think, let’s lump our corporate data into the damn thing and expect a nice report to come out. Oh, good, I got a nice report. Yeah, but you left all your proprietary data in the ChatGPT void.

 

Joel – 00:26:54: Yeah. Well, I mean, I think that’s, I mean, we’re all risk people here.

 

Phil – 00:26:58: Oh yeah.

 

Joel – 00:26:59: You know how you manage risk, right? You always say what’s the business value versus the risk, and you make a decision. The problem is the business value is so high, and it overrides any risks that we can theorize because it hasn’t been actualized yet, except for a few companies that put some very sensitive stuff into ChatGPT that’s shared with the world that way. Other than that, there hasn’t been a lot of damage yet.

 

Max – 00:27:22: I know we’ve got an attorney coming on this podcast, and that’ll be an interesting question to pose, Phil: if we look at case law and all the prior things we have done, when you have something new where nothing has been done before, how do you determine something, right? How do you judge that? I think that’ll be an interesting question to pose. But yeah, it’s a very unknown space, and we’re trying to figure out where risk people fit into play because, you know, Joel, you mentioned the business value is very high. I agree. But I do recall a time when somebody said this, I think it was a risk person or a CISO. Somebody said to me a long time ago, you know how we got seat belts? Somebody had to die.

 

Joel – 00:28:05: Oh

 

Phil – 00:28:08: Well, you know how we got our cybersecurity programs? The company had to be on the front page of the Wall Street Journal.

 

Joel – 00:28:15: Yeah. That’s a good way of putting it, yeah.

 

Max – 00:28:19: Unfortunately, right? Like we’re always the cleanup crews to some degree, sometimes damage control.

Phil – 00:28:25: And I wonder too, now that I’m seeing a growth in risk quantification platforms, SaaS platforms that are elegant. They’re not this complex mishmash of algorithms that you, as an administrator or user, have to figure out. These things are elegant; you can implement them, and all of a sudden, the data comes out. Now that we have that as a nice tool to start relying on, how do we leverage it to paint as accurate a picture as we can of the trade-off between the business value and the risk? It would be helpful if we could figure out a way to have risk quantification be a learning thing, well, it’s AI, maybe AI-ish, but be able to produce some decent metrics and dollar values that would have senior management go, yeah, there’s a lot of business value there, but oh yeah, over here it says somebody could die.

 

Joel – 00:29:23: I think that makes a lot of sense, getting to that level and quantifying some of those things. It’s still a new space, so it’s hard to quantify some of those, but I like where you’re going with it. Outliers are a powerful thing, too. AI can give us outlier detection when unsupervised learning is used, which means you don’t have to know the answers. You just cluster all the answers and find what doesn’t make sense or what’s outside. Some of that may be what we need to do in the beginning to understand what’s not normal. Where could we be most exposed because of these outlier situations?
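[Editor’s note: the clustering idea Joel describes can be sketched in miniature. This is a hypothetical illustration, not anything the guests built: questionnaire responses are scored, and any score that sits far from the group’s center is flagged for human review. The scores, threshold, and function name are all invented for the example.]

```python
from statistics import mean, stdev

def flag_outliers(scores, threshold=1.5):
    """Return indices of scores that sit far from the group's center.

    A crude stand-in for unsupervised outlier detection: no 'right
    answers' are needed, just a notion of distance from the rest.
    """
    mu = mean(scores)
    sigma = stdev(scores)
    return [i for i, s in enumerate(scores) if abs(s - mu) > threshold * sigma]

# Hypothetical security-questionnaire scores; the last respondent's
# answers look nothing like the rest and get flagged for human review.
responses = [0.71, 0.69, 0.72, 0.70, 0.12]
print(flag_outliers(responses))  # → [4]
```

A real deployment would cluster multi-dimensional answers (e.g., with k-means or isolation forests) rather than a single score, but the principle Joel describes is the same: find what doesn’t fit, then look there first.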

 

Phil – 00:29:55: Yeah, so I think it’s a crawl, walk, run scenario, especially now with artificial intelligence. I think more so than ever, we’ve really got to take that approach.

 

Max – 00:30:04: Yeah, well, I mean, with this artificial intelligence injection, it certainly feels like crawl, walk, crawl.

 

Phil – 00:30:13: Right? Crawl Lock, wait a minute, something new happened. Oh, we’re going to crawl again. Good point, man. It’s a good point.

 

Max – 00:30:21: Well, it reminds me of the conversation we had with Jeff, who was one of our guests. He wrote one of the books on cyber risk quantification. I’m friends with another gentleman named Jack Jones, who is out of Columbus and wrote the FAIR model, right? We all know the FAIR model, the loss events, and things like that. And I think there’s a lot more exploration that will happen because there are many different mathematical models. And this artificial intelligence area explores it in a very different way, right? It’s perhaps providing the mathematical means we need to come up with a number. But I still can’t see the human decision being disconnected. I think before we get to that, we just have to be way more comfortable. You can gather the data. You can get me a pretty report after you eat up all my sensitive stuff. But that ability to make a decision, I don’t know, Joel, how you feel about this, but I don’t know if anybody is truly comfortable with, like, yeah, let’s just let AI make some of these decisions.
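[Editor’s note: for listeners unfamiliar with the FAIR model Max mentions, its core idea is that annualized loss exposure is roughly loss event frequency times loss magnitude, each estimated as a range rather than a point. A toy Monte Carlo sketch of that idea follows; the triangular distributions and the dollar/frequency ranges are invented for illustration, not taken from the FAIR standard or from anything discussed on the show.]

```python
import random

def expected_annual_loss(freq, magnitude, trials=10_000, seed=42):
    """Monte Carlo estimate of annualized loss: frequency x magnitude.

    freq and magnitude are (low, high, mode) triples driving a
    triangular distribution -- a common simplification when experts
    can only give min / max / most-likely estimates.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        lef = rng.triangular(*freq)       # loss event frequency (events/year)
        lm = rng.triangular(*magnitude)   # loss magnitude ($ per event)
        total += lef * lm
    return total / trials

# Invented ranges: 0.1-2.0 ransomware events/year (most likely 0.5),
# $50k-$500k per event (most likely $150k).
ale = expected_annual_loss((0.1, 2.0, 0.5), (50_000, 500_000, 150_000))
print(f"Estimated annualized loss exposure: ${ale:,.0f}")
```

This is exactly the kind of “decent metrics and dollar values” output Phil describes; the open question the panel raises is who attests that the model behind the number is sound.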

 

Joel – 00:31:21: I mean, I think it depends on what we’re talking about, right? And I know we’ve talked about whether human life is involved, but we’re doing that today. There’s robot-assisted surgery, AI-assisted, because guess what? It’s solving medical problems we couldn’t solve before. So in situations where the level of precision is beyond what humans can reach, or the decision-making has to happen faster than a human can react, we’re willing to make that trade-off. You can’t put a human in because it would break that ability and hinder maybe massive value to the human race. In other places, if you’re talking about the end of an audit to look at your cyber risk, certainly a human can review it. So it depends on the speed and the ability, I think, of where you insert the human.

 

Phil – 00:32:02: Gosh, that’s just another angle to review and look at. So who’s going to come up with all of these multidimensional angles where, yes, we should use artificial intelligence here, and maybe we could have the human do the review, or here, like you said with the surgical procedures, maybe we can’t because we’re just not fast enough? But how many other dimensions are there out there? It’s unlimited in my view right now, or it’s only limited by our imagination. But how do we accommodate all that?

 

Joel – 00:32:34: It gets more and more complicated because the one other thing I’ll throw out, and we could just keep going on this, is that one of the trends I’m seeing in my AI research is having AI be that extra set of eyes for AI. So there’s a model that watches the model and provides checks and balances, one that’s trained differently and has different risk equations associated with it. It’s not getting simpler. Yeah, just add more dimensions.

 

Phil – 00:32:56: Wow. Use AI to watch AI; that’s like having the developer do the security for your systems.

 

Max – 00:33:05: But I think that’s really what it’s going to come down to, right? I mean, I look at it as simply as source code analysis to some degree. I remember one of the guidelines in the Department of Defense many years ago was, yeah, you should do a manual review. Now we’ve got billions of lines of code. There’s no way we’re doing manual reviews of anything. So in that same way, a capability would have to be developed that is a real validation capability, right? On the other hand,

 

Phil – 00:33:32: One thing that just came to mind, and I actually thought about it a while ago in the context of risk quantification, right? Because everybody’s building their own risk quantification thing. Somebody’s using a FAIR model; they may be twisting it a little bit. Somebody’s been working with MIT, and they’ve developed that model. And who knows how many other models there are out there? I go back to my days in cryptography. If you don’t have an outside expert come in and review how that cryptography has been implemented, you don’t know that it works, or whether it could be subjected to some form of attack or could be destructive in any way. So, having said all that, I think there needs to be some level of attestation done for risk quantification models. And I think we should extend that to artificial intelligence and have some sort of attestation for artificial intelligence models. And maybe at some point in the future, we may grow beyond the need for it because we’ll get so good at it. But right now, who knows who’s good at it and who’s not?

 

Joel – 00:34:37: Yeah, I think you’re onto something there. That makes a lot of sense. And it comes back to where we started the conversation, what we were talking about before the insurance process matured: the security leader who had the most confidence is the one who got the smallest premium. Is that where we’re heading back to again?

 

Phil – 00:34:51: Yeah, yeah, something like that.

 

Max – 00:34:55: Well, Phil, thank you so much for this conversation. I think it was fascinating, and we’d love to have you back. But before we leave, I’m always curious about the analyst space, as you work at IDC. Man, I could see an emerging practice for analyzing this whole capability. Do you see that within your firm? Is there a strong focus within the analyst community on taking a look at artificial intelligence and its upside and downside?

 

Phil – 00:35:25: Yeah, absolutely. It’s definitely a new and emerging area, and if an analyst company is not looking at it, then too bad, so sad. But yeah, we’ve got to start looking at it ourselves and provide some relevant data to support any positions. And I think that’s one of the reasons why I love IDC so much: anything we do is supported by data. And it’s data that the analysts are collecting. Now, God forbid we use artificial intelligence to collect our data. Then we’ll be back to the future again, right? But I think you can count on IDC that if we do have something to say in that space, it’ll be backed up by relevant, timely, and responsible information.

 

Max – 00:36:06: I think that’s the key, Phil: responsible, right? I think we’ll hear that a lot in the future as a theme, either ethical AI or responsible AI. We’ve had so many folks come on this show, and the root of it comes down to trust, responsibility, and ethics, because that’s how powerful this thing is.

 

Phil – 00:36:24: You know, I’m engaged in a study right now where one of the questions I asked vendors was, how do you demonstrate trust as an outcome? I think that’s an absolutely appropriate thing to expect from the AI space. How do you demonstrate trust as an outcome of the use of this model, right? You hit it on the head, Max. Thank you.

 

Joel – 00:36:43: Yeah, that’s huge. Binary question: in the next year, year and a half, is AI going to be a friend or foe to the insurance industry? What do you think?

 

Phil – 00:36:52: I think yes. I think yes; I mean, there’s going to be some good stuff, and there’s going to be some bad stuff.

 

Joel – 00:37:01: All right, that’s got to be part two, to crack that open. Yeah, so no straight answer.

 

Phil – 00:37:06: All right, I’ll see you in a few years.

 

Joel – 00:37:09: Perfect.

 

Max – 00:37:12: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel – 00:37:23: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.

 
