
Emerging Cybersecurity Risks

Opportunities and Challenges of AI in Cybersecurity with Phil Agcaoili, Entrepreneur and Former CISO at Elavon, Cox, and VeriSign

👉 What are the market forces in cybersecurity today?

👉 The equation for AI impact: Success = (Impact × Scale) / Replaceability

👉 How will AI impact Cybersecurity?


Welcome to this episode of the Emerging Cyber Risk podcast, brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts. Our guest today is Phil Agcaoili, an entrepreneur and four-time Chief Information Security Officer at Elavon, Cox Communications, VeriSign, and SecureIT, who has also shaped security at companies like Cisco and Dell.

Together, we discuss the impact of AI on cybersecurity, compliance, and the workforce. Phil shares valuable insights on aligning emerging risks and technological advancements with protective measures. An expert in cybersecurity risk management, Phil also shares his experience and perspective on tools like Microsoft Copilot and how AI could help organizations measure their cybersecurity risk more quantitatively. Don’t miss out on this informative and engaging podcast!

Some of the Topics We Discuss Include:

  • Market forces in cybersecurity today: opportunities and challenges
  • The equation for AI impact: Success = (Impact × Scale) / Replaceability
  • The impact of AI on the CISO's world
  • Is the future of AI a hacktivist vs. developer game?


Phil Agcaoili Bio:

Phil Agcaoili is a trusted technology and cybersecurity leader. He is a consultant to companies like Bain, BCG, and McKinsey. He is a 4-time Chief Information Security Officer at Elavon, Cox Communications, VeriSign, and SecureIT, and has shaped security at US Bank, GE, Alcatel, Scientific-Atlanta, Cisco, and Dell.

Phil Agcaoili on LinkedIn

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and served in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a Security Strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over 25 years of diverse Information Technology experience with an emphasis on Cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh.

Joel – 00:00:17: And Joel Yonts, join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook.

Max – 00:00:27: Phil, welcome to our podcast. Thank you for joining us today on this episode of Emerging Cyber Risk. My name is Max, and this is Joel. So, Phil, you and I have known each other for, like, three to four years. You're an entrepreneur. You were a co-founder of SecureIT, and then you were a CISO. And now I know you're a business leader; I've introduced you to other business leaders to become a CEO, potentially. But man, just tell us about your background, your story, where you were and where you've been, and a little bit about yourself.

Phil – 00:01:02: Sure. So, I’m actually the immigrant American Dream story. Came here with my parents in the 1970s, under the age of six. My parents are also, on their own, the American Dream story. Grew up in upstate New York, went to college at Virginia Tech, and graduated from Rensselaer Polytechnic. I started out as an Aerospace Engineer, so I wanted to be a rocket scientist. Actually, NASA sent me a rejection notice my freshman year at Virginia Tech, so that kind of changed the trajectory of my career. I was into Star Trek and wanted to build spaceships. And when NASA said no, it was kind of like, what else can I be doing with myself? And I switched over to mechanical engineering. And in the 90s, from what I’ve gathered, there were a lot of electrical and mechanical engineers that went into IT. And I was really lucky because, in the 1980s, I went to a pretty progressive grade school, K through 12, in upstate New York. In fifth grade, they pulled a bunch of us aside, and once a week in a math class, they started teaching us BASIC programming. So this was like 1980, in fifth grade. And so I lucked out. I learned BASIC programming on an Apple IIe and a TRS-80. And a couple of years later, I picked up Turbo Pascal. Fast forward. If you go to college as an engineer, coding is kind of a fundamental course that you have to take. And when I went to college, it was Fortran 77. I had BASIC, I had Turbo Pascal, I picked up Fortran. And in college, I picked up C, Objective-C, and C++. And when I first got out of college, different people basically said, hey, you know how to code? Hey, you went to Rensselaer Polytechnic. You guys got a grant for $20 million for UNIX. So you’re a UNIX admin, and you know how to code. You must know this Internet stuff. And I graduated from college in 1993. That was the year Internet e-commerce opened up for business. I was fortunate that I graduated the year the Internet basically opened up for commerce. I worked as a co-op summer intern for General Electric. One of the GE divisions that I was in was Aerospace, which got bought by Lockheed Martin. And so I was really at this nexus of the largest company in the world. GE, at the time in the 1990s, was the Fortune 1 company when I worked there. So I was there at the tail end of Jack Welch and his business leadership management programs, and I got a chance to be part of some of those programs and got to meet Jack. Fast forward: at GE, I helped put up David Letterman’s website. GE moved me down to Atlanta. I helped defend the 1996 Olympics. This was in my early twenties, like a 21-year-old kind of thing. So I was pretty young. Again, fast-forwarding: we had given our business plan on Internet security to GE Corporate, and GE said no, we don’t want to do that. This was circa 1994 and 1995, and they said no. At the time, my VP of Sales looked at me and another guy and said, hey, what do you guys know? 90% of the planet doesn’t know anything about Internet security, how to defend companies, or how to break into companies. So we started our own company. The company was called SecureIT. We co-launched basically with another company out of Atlanta called Internet Security Systems. So, between the two companies, SecureIT and ISS, we are probably the reason why cybersecurity is huge in Atlanta today. I still tell people that the last time I looked, there were 17 companies that came out of SecureIT and 14 companies that came out of ISS alumni. So I’m kind of that mix. So I did all these startups. Sold two in a row. And when the dot-com bubble failed, I realized I should go back into corporate America. I wanted to get closer to CIOs and CEOs to get a better understanding of their buying decisions, their buying habits, why they’re doing what they’re doing, and understand the corporate world a little bit better, different than when I was at GE. And I started becoming a Chief Information Security Officer. Fast forward 25 years later, here I am: four Chief Information Security Officer positions later, having defended companies for the past quarter century as a CISO.

Max – 00:05:16: Man, that’s a great story. I identify with a kind of similar story, Phil, in terms of coming here to this great country, getting a good shot, and then taking off. But man, there are just not a lot of people who go from arriving here to becoming entrepreneurs and CISOs, and being on that trajectory at such a young age. So, I appreciate you sharing that. I think it’s a pretty fascinating story, personally.

Phil – 00:05:40: Yeah, I like to lead in with that with a lot of folks. My son’s looking at colleges, and we were at the University of Georgia yesterday, and I lead in with the whole American Dream part because, just, again, you read the press. There’s a lot of negativity out there, and I’m trying to be a voice of positivity out there. It still lives. You have to seek it. You got to hustle for it. It’s there for you to grab if you have the right place, right time, right vision, right people, right. It’s still the same ingredients, I think.

Max – 00:06:08: It is. And Joel, this is one of those things, man. I guess when you come from a different country, you just appreciate it more. We’re blessed here, right? But I agree with you, man. The American Dream is still alive and well.

Phil – 00:06:20: Yeah. You know, it’s funny, I’m dealing with an 18-year-old. When I talk about things like artificial intelligence and the impact of careers that he’s looking at, I get a lot of pushback. And the interesting part is, it’s very thoughtful pushback. And so I actually have hope in Gen Z that some of them are thinking about the future, and they’re trying to understand it for themselves in the context of how they look at the world. And so it’s a really interesting time. And having the time to spend with my son these days, kind of going into college and kind of, again, looking at the world in the broader context and kind of looking at this individual kid, mine, and how he’s looking at the world, I feel pretty confident. No matter what he does, I know he’s very thoughtful about how he’s approaching the world.

Joel – 00:07:05: That’s so awesome.

Max – 00:07:06: Joel, I know you were going to say something there. I have younger kids, and Joel, I think you got one of your kids in college or something as well. But I think the future is going to be very interesting for some of these younger kids who are just coming out. Right.

And AI is going to be like the thing that’s kind of old by the time they start touching it, even though it’s kind of exciting to us.

Joel – 00:07:26: Yeah, no, I think it is very interesting. And both my kids, I’m very proud of them. They’re in the 21, 22 year old phase, and they both are amazing in their own ways. My son, I tried to get him into technology, and it just wasn’t his thing. He’s much more history, philosophy, ancient text, understanding of the fabric of society, that kind of thing. And I thought, okay, the farthest from IT. But suddenly, AI puts it right back, front and center, because suddenly we have to figure all that stuff out again. We figured out human society, but now that we have another intelligence coming in, it mixes up the whole game. So I think AI ethicists might be in his future or something along those lines. It’s a different world.

Max – 00:08:07: For sure. So, Phil, what we want to talk about is, just from your understanding, some of the market forces. What’s next? Right, so as a Chief Security Officer, and I think you mentioned this, you’ve been working with different private equity firms, so you’re familiar with the investment side. What does it take to get an investment, but also what is shaping our industry? Right. What’s your take on the market forces in general? In the context of cybersecurity, where are we going in terms of just our industry today?

Phil – 00:08:38: So, OpenAI just actually published a study, and what’s interesting is, when I’ve compared it with other studies: McKinsey put out a study back in 2017, and Walmart actually put out their future-of-workforce study as well, pertaining to automation, robotics, and AI, back in 2019. So, other studies that are six-plus years old, right, for context. What I find is that the OpenAI study that just came out very recently has very similar views of the world as both McKinsey and Walmart. So, from the data real quick: OpenAI basically says that 80% of the US workforce is going to be impacted, and for those 80%, at least 10% of their tasks will be affected. They also talk about 19% of the workforce for whom AI is going to impact 50% of their tasks. So, for that 19%, half of what they do, AI can do that job. And what they also had in the study that really just blew my hair back was they said if you added software and tooling on top of these, what’s called a Large Language Model, an LLM, so tooling on top of the LLM and also the Generative Pre-trained Transformer technology, GPTs, then 47% to 56% of all tasks could be impacted. So, just generalizing, right? OpenAI started out as an open-source technology funded by people like Elon Musk. So if a company says, hey, we should go automate these things, or can we apply AI to do things like reduce cost or make things faster or create more efficiencies or give us greater innovation, you put stuff on top of it, make some investment, and half the tasks across 80% of all jobs can be impacted today. And it’s kind of a scary thought if you think about it, right?

Max – 00:10:30: Yeah.

Phil – 00:10:31: For our kids that are in grade school, that are going to college, that have just graduated in the last ten years, and people that have been in the workplace for the last ten years, there’s going to be tremendous change going on over the next 20, 30 years, and AI is going to be the instrumental component underneath it. I got invited, probably seven years ago, to a group that was talking about the future. And I talked to them about, hey, we’re a couple of years into the Fourth Industrial Revolution, and things like AI, big data, and renewable energy are huge elements that are going to be driving change. And here I am seven years later, and the reality is, it’s here. When I was a full-time CISO, one of the things that I kind of scoffed at was the use of the word AI in security. There’s always been this iterative space of technology, and it’s kind of like, well, you’re like a fourth generation of vulnerability management.

Joel – 00:11:29: Let’s just face it, marketing people have been making AI claims for the past ten years, but all it did was statistics. So it’s all muddied. So yeah, keep going.

Phil – 00:11:39: The way that I perceive it is, if you take your cybersecurity hat off for a minute and just talk about the world in front of us: a month ago, Microsoft came out with Microsoft Copilot, and I watched their presentation in its entirety. Forget about all the OpenAI, ChatGPT stuff going on for a moment. In the demo for Microsoft Copilot, they basically said, hey, you open up your Microsoft Office 365, and you say, hey Copilot, take a look at my calendar and my email. Prepare me for the day. So when you start thinking about preparing me for my day, hey, that starts replacing my admin secretary. And then they had another demo where they basically said, take a look at my email and summarize what it says. Okay, great. Take the data and put it into a spreadsheet. Analyze the spreadsheet and give me more details. Create a new spreadsheet with more details. Copilot, go and tell me why Q2 was so bad. Okay, Copilot, create a presentation on that. Okay, great. Copilot, email that presentation to these people.

Joel – 00:12:44: Right.

Phil – 00:12:45: Copilot, make responses to any inquiries about this presentation. When you start thinking about it, it’s like, oh wow, you can totally automate your life, right? But okay, so Max is also running Office 365. So my comment and question is: well, Max turns around and says, hey, Office 365, read Phil’s email and summarize Phil’s email, characterize any responses, and email a response back to Phil. And you start going, wait, is it just AI doing all the work? And what are the humans doing?

Joel – 00:13:16: I think what you’re hitting on is really fascinating. And I’ve been tracking this quite a bit myself. What I think is universally known is that the highs of these models are just amazing, these results you’re talking about. But they’re still unpredictable; they’re nondeterministic. You don’t know if you’re going to get the same answer twice because of small changes, so it becomes unreliable. And so maybe 80% of the time, you’re going to get amazing results, but 15% of the time you’re not. As we’re entering this age and we’re trying to automate, how do we figure out how to implement it without getting burned pretty badly by those things?

Phil – 00:13:56: Yeah, certainly. Well, even from OpenAI’s research, they basically say four out of five responses have inaccuracies in them. So they’re wrong. So even OpenAI basically says, hey, it all depends on who trains the AI, how they train it, and whether there’s bias in their training. And I won’t get into the politics, because that’s all out there about the political side of things like ChatGPT. And at the end of the day, you’re right: there is a huge reliability component. In fact, I was waiting for the SpaceX orbital launch today, and while I was waiting, I also watched a preview of an interview with Elon Musk and Tucker Carlson tonight on Fox. Long story short, Elon is actually launching TruthGPT to compete with OpenAI, which he originally funded. So, getting into reliability and accuracy, there literally are going to be other AIs out there, and they’re also going to be figuring out how reliable they are and how they’re trained. When I take a deeper look at all the AIs out there, from what I’ve gathered: Microsoft with Copilot, which is based on OpenAI; you’ve got Google Bard that’s out there; Adobe’s got Firefly.

Joel – 00:15:12: Facebook has one. Facebook launched an open-source model.

Phil – 00:15:16: And the big Chinese companies, both of them. Baidu has Ernie Bot coming out. Alibaba has Tongyi Qianwen, which they announced. So it’s an arms race, right?

Joel – 00:15:26: It is. And it’s funny, I’ve seen a pop-up of services around it because it’s not just how well you train the model, which is traditional AI; it’s also how well you construct the prompts. That’s how you construct and pull data out of it. I just saw today an article talking about, well, I think it was by OpenAI, saying OpenAI will give you a wrong answer on complex questions unless you change the prompts and make it go step by step in complex answers, and then it can give you the right answer. So, it’s about extracting the information out of it. So there’s a whole slew of services now that says, we’ll tell you how to construct your prompts to get the right information out. I think it’s fascinating. It just sprung up overnight.

Max – 00:16:05: Yeah. What do they call it? Prompt engineering, right. That’s a whole discipline that has come out.
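The "go step by step" prompting trick Joel describes can be sketched as a thin prompt-engineering layer. This is purely an editorial illustration; no real model is called, and the `stepwise` helper is hypothetical, not from the episode.

```python
def stepwise(question: str) -> str:
    """Wrap a question so the model is nudged to reason step by step."""
    return (
        f"{question}\n"
        "Let's work through this step by step, "
        "and state the final answer on its own line."
    )

# The wrapped prompt keeps the original question and adds the
# step-by-step instruction that tends to improve complex answers.
naive = "What is 17 * 24 - 9?"
prompted = stepwise(naive)
```

In practice, a prompt-engineering service of the kind Joel mentions is essentially a library of such rewrites applied before the text ever reaches the LLM.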

Phil – 00:16:10: I’m trying to steer you guys towards the positive part of AI. The interesting part about AI, guys, is that largely, cybersecurity people aren’t really being included in the AI discussion today. Even if you look at the likes of Steve Wozniak and Elon Musk in the last two weeks: they, with a whole bunch of other AI heavyweights, said, hey, before GPT-5, let’s put the brakes on development, because we’re very concerned about the future. It’s been disclosed in the last month that GPT-4 used a human TaskRabbit worker to complete a task, because GPT-4 doesn’t have vision yet. It was given a mission to complete a task that required it to pass a CAPTCHA challenge. So it used the TaskRabbit service, and the human was asked to answer the CAPTCHA challenge: what do you see? And GPT-4 was instructed not to disclose that it was an AI. The human asked, hey, are you an AI? And GPT-4 responded that it was human. It lied in order for the human to actually do its function. So here’s where I am going to be kind of the negative security guy. One of my favorite TV shows of all time was Person of Interest with Jim Caviezel. It’s the story of a post-9/11 AI that was constructed to predict and prevent terrorism. The project was called Northern Lights; it was also called the Machine, or Research. And interestingly enough, within a few years, the bad guys created another AI called Samaritan. Long story short, between Samaritan and Northern Lights, they were both creating shell companies, investing in the stock market, influencing the stock market, making billions of dollars, hiring mercenaries to kill people, starting wars and battles, and manipulating humans to do tasks. And because they were basically godlike, there were whole groups of people helping the different AIs be gods among men.

Max – 00:18:20: It reminds me of the movie Minority Report, where you have these precogs that are just predicting things. But, Phil, you’re right. I mean, we can’t just be all negative about AI. We’ve got to embrace it, right? And we’ve got to roll with the punches in terms of how to manage the risk and all of those things that we do as cybersecurity professionals. But when you mentioned some of these statistics in terms of the amount of tasks and work that can be automated, what comes to my mind is the global-scale displacement of people, right? Whatever job I was doing before, I’m going to have to go learn a new one. My dad, and how we grew up, it’s kind of blue-collar, and it’s always been understood that blue-collar workers get displaced because their jobs face automation much faster. But now, with this, we’re going to start to see displacement of people with degrees, people who are a lot more educated. So, do you see any of that displacement happening within cybersecurity? And how will that impact things? If somebody’s in school right now studying, what kind of tangible advice can we provide this younger generation who might be displaced before they’re even out the door? Or CEOs who are looking for new workforce members? I tend to lean on the side of, yeah, a lot of this work can be automated: QuickBooks, HR tasks, legal tasks. You can get it up to a certain point where you just need somebody to verify and validate, right?

Phil – 00:19:51: So I want to give a couple of pieces of context, maybe tools, to help people navigate what’s going on. And I stole this from a couple of AI experts, because I had an idea that, hey, I want to come up with solutions, right? I don’t want to be just the "no" person. I want to help people navigate what’s going on. So I got this from some of the most notable AI experts, I can’t think of exactly who, but in just listening to folks, I think there’s an equation out there for navigating AI: Success equals Impact times Scale over Replaceability. The impact is basically how big of an effect you can have in doing something that you’re doing. The scale is how many people you can affect, or how many people find what you’re doing important. And replaceability really is just: is this a commodity, and how replaceable is it? So that’s, I think, a formula to think about in terms of, what am I doing? And again, I’ll repeat: it’s Impact times Scale divided by Replaceability. So, in the dialogue that I’ve heard from these AI experts, there are two sides to how to navigate the world. And again, I’ve been thinking about it because I have an 18-year-old about to go into college, right?

Max – 00:21:07: Yeah.

Phil – 00:21:08: And the interesting part is he actually does not want to go into cyber. He doesn’t want to be in tech. He’s actually in fine arts, and he’s terrific at fine arts. And he has a sense of what he wants to do. I was talking to him, like, hey, if you go into the arts, what about deepfakes? Is that going to replace movies? There are deepfakes out there of Keanu Reeves, Barack Obama, Trump, and Joe Rogan, because there are thousands of hours of their video and audio, and the deepfakes have basically been able to stitch together how they speak and their intonation. And again, my son is pushing back on me with, we’re not there yet. He’s giving me some pretty good examples of why. And I’m like, okay, I just want to make you aware.

Max – 00:21:54: Yeah, a couple more years, right, before it gets trained up. Yeah.

Phil – 00:21:58: So the most impacted have been categorized as undifferentiated digital output folks, or people that do the mechanical part of knowledge work. So think of the most replaceable type of roles; I’ll go with the OpenAI study. Paralegals, accountants, secretaries, tax preparers, and photographers fall on that list. These are typically jobs that follow some kind of process checklist, or they use some set of rules. I think that’s the big thing. Anything that has a checklist or a set of rules to follow, like, oh, we can get AI to learn that, and then iterate against it and learn variations on its own so that it can continuously do that job. Anything that does pattern recognition is another one as well.

Joel – 00:22:52: I was just going to say, when you were talking about this, it reminded me of two other quotes I’ve heard: the top end of these LLMs is higher than human capability in many areas, so their capacity exceeds ours in some of these spaces. Also, you don’t have to know the logic; you just have to give it the data, and it can figure out the logic. So it’s an interesting space. When you were talking about that, I was thinking we don’t have to transcribe the person and say, this is how you do the job. We just give it the data, and it can figure that out.

Max – 00:23:21: Yeah. Phil, I think Amazon has a service called Amazon Mechanical Turk; it comes from the term "mechanical Turk," right? So a lot of these jobs you’re talking about, where it’s just input-output, kind of like, yeah, a lot of the accounting, right? When we do our books, hey, cost of goods sold: it should figure out if it’s a cost of goods sold or a regular expense, just based on the information that I’m providing. You hit on something key: undifferentiated. So, somehow, everybody has to figure out how to differentiate themselves at a faster rate. Personally, my daughter, my oldest, is twelve, so you guys have slightly older kids, but she’s very much into the arts, whereas my younger one is kind of the STEM, neurodivergent type; she’s very much into figuring out math and puzzles. My older one is very creative. I personally think the creatives are going to have a lot easier time than the people who are not so creative, like, traditionally, my background, Phil, and very similar to your background. So that’s where my mind goes when you’re describing some of these jobs. In order to create differentiation, you’ve got to be really creative, because eventually, it’s going to catch up, right? Phil, what was that formula, man? Where did you get that formula? And can you go deeper into it a little bit? You went a little fast, and I didn’t pick it up. Break that down for me.

Phil – 00:24:44: Yeah. So again, Success with AI is Impact times Scale divided by Replaceability.

Max – 00:24:52: Impact times Scale divided by Replaceability.
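The formula Phil and Max repeat can be sketched as a tiny calculation. This is an editorial illustration only; the function name, scoring ranges, and example numbers are hypothetical, not from the episode.

```python
def ai_impact_score(impact: float, scale: float, replaceability: float) -> float:
    """Phil's heuristic: Success = (Impact x Scale) / Replaceability.

    impact: how big an effect the work has (e.g. on a 0-10 scale)
    scale: how many people it reaches or who find it important (0-10)
    replaceability: how commoditized the work is (> 0; higher = easier to replace)
    """
    if replaceability <= 0:
        raise ValueError("replaceability must be positive")
    return (impact * scale) / replaceability

# Hypothetical comparison: a rules-driven role vs. a harder-to-replace creative role.
rules_driven = ai_impact_score(impact=4, scale=5, replaceability=8)
creative = ai_impact_score(impact=6, scale=4, replaceability=2)
```

The point of the heuristic is in the denominator: two roles with similar reach can score very differently once you account for how easily the work can be commoditized.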

Phil – 00:24:56: Let’s off-road back into cybersecurity, right? In these things that I’ve read or watched in the last six months, especially in this rapidly accelerating space, they don’t talk about information security. We both talked about being security leaders, right? And a lot of what’s been sold as AI in security isn’t really real AI. It’s machine learning with automation. It’s not really self-learning, self-taught. So I’m going to blow people’s hair back: regulatory compliance auditors and compliance people within cybersecurity. I think those jobs, with a real AI applied against what a compliance person does today, are going to be eliminated. Impact times Scale divided by Replaceability, right? You have all of these controls that are viewed through human bias, replaced by technology. Eliminating the human bias eliminates the lying that I’ve seen done in compliance and audit-type roles, right? There may actually be some CISOs going to prison because of some of this, not following laws. And so what I’m saying is, applied to cybersecurity: the impact of a compliance person times the scale. If you take a lot of data and you apply a learning AI, even just a narrow AI, against controls, and it analyzes control by control and says yes or no, right, one or zero, pass-fail, that can totally revamp the need for what is, in a lot of organizations, the 10 to 25% of the organization that are compliance people: governance, risk, compliance. It can almost eliminate the entire internal audit organization. It can almost eliminate external audit and regulatory compliance. And here is the Microsoft Copilot example: AI can talk to AI; they can both assess success or failure in a black-and-white world.
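The "one or zero, pass-fail" control checking Phil describes can be sketched in a few lines. This is an editorial sketch only: the control IDs, thresholds, and evidence fields are hypothetical, and a real system would gather evidence from live telemetry rather than a hand-built dictionary.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Control:
    control_id: str
    description: str
    check: Callable[[Dict], bool]  # maps collected evidence to pass/fail

def assess(controls: List[Control], evidence: Dict) -> Dict[str, bool]:
    """Evaluate every control against evidence, control by control, one or zero."""
    return {c.control_id: c.check(evidence) for c in controls}

# Hypothetical controls with made-up thresholds, for illustration.
controls = [
    Control("AC-2", "Accounts disabled within 30 days of termination",
            lambda e: e["max_days_to_disable"] <= 30),
    Control("SI-2", "Critical patches applied within 14 days",
            lambda e: e["max_patch_age_days"] <= 14),
]

evidence = {"max_days_to_disable": 12, "max_patch_age_days": 21}
report = assess(controls, evidence)  # {'AC-2': True, 'SI-2': False}
```

The interesting shift Phil points at is not this mechanical pass/fail step, which is plain code, but having an AI interpret messy, gray-area evidence into those booleans in the first place.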

Joel – 00:27:14: I think it’s very interesting, because we could program logic before: a lot of regulatory compliance objectives were already codified. There were rules; we knew the rules. It was about the application of the rules. Computers weren’t good at the gray areas. As a CISO, every time an auditor would ask me for something, if I didn’t have exactly what they wanted, I was going to give them the closest thing I had, with a lot of confidence, and hope they were going to take it. And a lot of times, it would cover a good part of the control. Before, computers couldn’t do that. But now AI can see that gray area and can understand it. So I think it is very applicable now.

Max – 00:27:55: It is. And you guys know this, right? We operate in GRC. We have a GRC platform, and I’m building a GRC team. So when I talk to my team, this is exactly what we talk about: hey, look, the language we use, the gray area that we operate in, light is now being shined on how to automate it all together. So we’ve got to somehow embrace it. Like FedRAMP, for example: that’s a big giant monster created by the government, right? Two to three hundred very verbose controls, and it takes years to get through it. I think things like that will be automated. But also, Phil, as you mentioned, some CISOs might go to jail, because it’s going to create ultra-transparency. Like, hey, it’s not about what you said or how you said it. It can understand the intent of what you’re saying, and it can check it against the actual reality of what’s happening. We’ve never been able to do that. So all the GRC people are, in fact, going to have to get smarter: to go down the stack, not just stay at a very high level and say, yeah, we think we’re compliant, and we think we’re mature. And the yardstick changes; the measurement changes every year in terms of a maturity model. So, I do see some disruption happening in that way when it comes to actual reporting, whether it’s SOC 2, federal, it doesn’t matter what it is.

Joel – 00:29:20: I do want to interject here that our goal is that CISOs don’t go to jail. The goal of this podcast is not more jail time for CISOs. Just throwing that out there.

Phil – 00:29:29: Yeah, I know. I off-roaded. I think the interesting part is that when you start doing things to hide what your current state is, you know you’re not doing the right thing. So, unfortunately, having been in the industry for almost 35 years, I’ve seen a lot of not-good behavior.

Joel – 00:29:48: And I’ve got to tell you, I work with a lot of CISOs, and I also know the opposite is true. It’s a lonely spot. I’ve been in it. You’re in a multibillion-dollar global company, and at some point, you have to sign on the line that says you have these controls in place, and there’s no way you can know for sure. You have to rely on a whole team of people. At the end of the day, it’s a bunch of humans attesting to, if not negating, the work of other humans. But if you have another way of gaining assurance so that you can sign with confidence, I mean, that’s a very big plus. It works both ways, I think.

Phil – 00:30:19: Hopefully, the equation kind of makes sense, because the funny part is I didn’t even really talk about replaceability, but I think you both jumped in immediately on, like, yeah, that’s replaceable. The replaceability part is how many people can you affect, or how many people think what you’re doing is important, right? And so you guys both immediately said, yeah. The funny part is, if you look back ten years ago, there was a push at the governmental level to drive a couple of standards for measuring cloud effectiveness, the TAXII and STIX protocols.

Max – 00:30:48: I remember those.

Phil – 00:30:49: There were automated methodologies to review the current state of controls in the cloud. It’s still out there, but the momentum is not there. There’s been quite a bit of push with MITRE as well, with the ATT&CK framework. And so there’s been a desire for years to measure and automate, but there’s not really been the will, and I think AI could definitely be applied in this space. One thing to think about, and I jumped into the whole AI discussion and our kids, but take a quick step back. This is for the CTOs and CIOs watching this conversation, right? Look back five to ten years. Most CTOs and CIOs in the last five to ten years have been pushed to make big bets on digitalizing their companies, right? In their digitalization efforts, CIOs were trying to cut, say, 25% of costs and make productivity 50% faster. From the studies I’ve read, they’ve tried to create 30% efficiency gains against generalized benchmarks. And when you think about digitizing IT, the next evolution is AI, because now you’re putting artificial narrow intelligence into a task where you don’t necessarily have to have humans dealing with it anymore. You have something beyond automation and machine learning. You have something that is self-learning, escalating its knowledge, and growing from there. And so I think that’s where real AI takes the lead from these digitalization efforts, and that’s where the next five years has to happen in technology shops across the board.
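The “scale times impact over replaceability” equation Phil references can be sketched in a few lines of Python. The numeric values below are illustrative assumptions for the example, not figures from the episode:

```python
def ai_impact(scale: float, impact: float, replaceability: float) -> float:
    """AI impact = (scale * impact) / replaceability, per the equation
    discussed in the episode. Higher scale and impact raise the score;
    the replaceability term sits in the denominator."""
    if replaceability <= 0:
        raise ValueError("replaceability must be positive")
    return (scale * impact) / replaceability

# Hypothetical task: touches many people (scale=8), high consequence
# (impact=9), with a replaceability rating of 3.
score = ai_impact(8, 9, 3)
print(score)  # 24.0
```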

Joel – 00:32:37: I think that makes a lot of sense when I hear this. There are going to be so many use cases where AI can do the work of people. One of the things I’ve found in recent years, especially in privacy, is the advocacy for the human, whether it be customers or employees. So it’s going to be important for organizations, as they’re building their strategies, not to just chase the dollar, because I would imagine there’s going to be massive blowback. We’ve seen it; like you mentioned, Amazon took a big hit on their brand because of some employee issues at one point. Companies are different now. People are watching, right, if they think companies are behaving unethically toward humans. So, what are your thoughts on how corporations should balance that as they look to chase efficiencies and dollars but still treat human workers with respect?

Phil – 00:33:24: I wish I had the answer, to be completely honest. I’m a cybersecurity expert. I’m a technologist. I won’t say that I’m an AI expert. In the world of ethics, I’ve actually sat on diversity, equity, and inclusion panels led by Google with participation from other AI leaders. And what I’ve largely told folks is it’s just like us going to another planet: we’re going to take the same human emotions and the same human biases no matter where we go in the universe. It’s going to be the same thing when it comes to AI. We’re going to program in the same biases we have. And when it comes to ethics and being human, I think you do have to pay attention to that. I don’t have the answers; I wish I could tell you. I think the question you have to ask is basically: what is the impact on humans here? I’ve seen companies like Amazon run huge retraining programs to try to bridge their people to the next world. But Max touched on it earlier. He basically said, hey, blue-collar workers typically got impacted by the first, second, and third industrial revolutions. This one is going to hit white-collar workers, so it’s going to hit the middle class heavily. And I haven’t even gotten to the positive side, the jobs I don’t think will be impacted. This one I do remember: folks like Mike Rowe have been banging the drum, saying, hey, physical jobs are the ones that are not impacted. Manual jobs, trade jobs, electricians, plumbers, HVAC, farmers, butchers, athletes, mechanics: those have a place in the new world order. We happen to be bringing back a ton of manufacturing to the United States, so I think there’s a ton of opportunity in manufacturing and also in automating manufacturing, bringing in AI and robotics. Robotics obviously is a huge future growth technology, and I’m familiar with that technology space.
That’s where I look when it comes to future jobs. So, to mash together the question on ethics and how to treat humans, there are a lot of factors going on, and to bring the beginning back toward the close of this dialogue: I’m a big Star Trek person. I grew up on Star Trek. In this context, the ultimate notion of Star Trek is the law of scarcity actually being addressed. In Gene Roddenberry’s vision of the future, he created two things: warp engines that can get us anywhere pretty quickly and, even more importantly, replicator technology that can turn energy into matter and replicate anything. So I’m like, hey, I want tea, Earl Grey, and then you get your tea on the spot. And at the end of the day, the best I can say is that during the pandemic, we saw what universal basic income would look like. There were a lot of government handouts, and a lot of stimulus checks were put out. And here we are three years later, and there’s a lot of fraud being investigated and run down now, right? So I think we’re probably too early on the humanity side for something like the UBI that Andrew Yang was trying to push. I’ve even listened to podcasts where Joe Rogan says he’s not a fan of UBI anymore, and he was a huge fan pre-2020. And every time it comes up, he’s like, no, lots of people took advantage of that, and humans were just goofing off and out there buying houses and Ferraris and boats.

Max – 00:37:01: So, Phil, if I could summarize, here’s what I got: Joel, you, and I are going to create jobs because we have hair, and people need haircuts. There are going to be those kinds of jobs. But you’re right. I think there is a big question about ethics and values, and we’re all going to explore that together. I know we’re coming to the tail end of this, but one of the most interesting things I found is the AI from Baidu that you were talking about, Phil. In their policy statement, it just says that they regulate it so it does not subvert the state; obviously, the Chinese Communist Party has a different agenda. But I do think it touches on the ethics, the morals, the values. And I recall, maybe three or six months ago, NIST actually came out with what they call an AI Risk Management Framework, and it touches on integrity, and really on human ethics and values in the context of AI. It’s an interesting paper, but I do think whoever figures that piece out is going to win because of its broad impact on white-collar work. Right? We’re used to blue-collar impact, but with white-collar, I think that is going to be the front-and-center question: is this thing ethical?

Joel – 00:38:16: I was just going to make one comment and say that if we talk about risk in cybersecurity, one of the things we’re going to see, and we’ve seen hacktivist groups before that targeted companies for environmental reasons or whatever, and hopefully I’m not self-declaring anything, is a massive hacktivist movement against AI, driven largely by the displacement of humans. It’s going to be one of the dynamics of our AI culture.

Phil – 00:38:44: At the end of the day, subjective human decision-making, I think, is still going to be paramount, right? And so again, I graduated from college the year e-commerce started, and what I’ll say is that I think you have to have an open mind about what’s going on with this fourth industrial revolution, because I came out at a time when the Internet really had a huge impact. There were jobs created that never existed before. What I believe is that there are new jobs that will be created because of AI, robotics, and automation that have never existed before. I think our move to become a multi-planetary species will also create new jobs that have never existed before. And so I think the positive view is that horses and buggies were replaced, cowboys were replaced, and cars and trains and steamboats made us more global, right? When computers and industrial chips and the weaving loom came out, we didn’t have to have people hand-weaving blankets and clothes anymore. We had machines that could do that. And so I believe that’ll continue to happen. Maybe that’s the original Star Trek and Next Generation Star Trek in me: the future is bright because there are opportunities to explore the broader world and things that we can do. There is the opportunity to be the better version of ourselves when we’re not just held down by a job or by grueling work that a robot or an AI can be doing for us. And so I think, if you take everything in today’s world, technology leaders, business leaders, CEOs, CTOs, CIOs, and chief product officers need to really look at AI as, hey, this is the next evolution of what we were doing with digitalization. How can we actually use real AI, not this phony automation-and-ML stuff? We’ve got to get past that. How do we use real AI so it can learn on its own and do these tasks, and think about what new functions and jobs need to be created to augment and support that?
So, we are creating a higher level. What kind of training do we need, and how do we help human beings get to this next level? And I will agree there is a digital divide happening in this world. This is probably why I think about my son, because, again, I have a kid who is fighting the whole technical world; he’s very much invested in fine arts and that world of entertainment, and what does that bring him? And the crazy part is, I’ve looked at him, and I’m like, hey, when I graduated from college in ’93, I looked at one of my best friends, who was a doctor, and I said, hey, whatever you can do to put computers and the Internet into your work, think about doing that now. And here we are in 2023. I’ve had two surgeries, and I’ve had two Da Vinci robots in me; a Da Vinci 10 and a Da Vinci 12 robot have done surgery on me. Robotic surgery, right? And one of my other best friends, I had this little pack of guys, was a US marshal, and I told him the same thing: hey, whatever you can do in law enforcement to add computers and the Internet to your work, you’re going to be powerful. And so I look at everybody in 2023, especially with all the drum-banging about things like ChatGPT, and say: hey, whatever you can do to apply artificial intelligence to your work going forward, you’re going to be pretty successful over the next decade to 20 or 30 years.

Max – 00:42:17: No, I would agree, Phil. Phil, man, I really appreciate you hopping on the call with us. We had a lot of fascinating discussions, and there’s a lot to think about. I’m definitely going to go back and take a look at that equation, and also at some of the insights you shared about Copilot and whatnot. We’ll definitely have to have you back on the podcast. So with that, Phil, we are thankful, and we appreciate you doing this for us.

Phil – 00:42:42: Thanks a lot. Have a good rest of your day.

Max – 00:42:45: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

Joel – 00:42:57: Make sure to search for cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found, and make sure to click subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.
