Emerging Cyber Risk

Joel’s Book Review (Secure Intelligent Machines)


On this episode of the Emerging Cyber Risk podcast, we discuss Joel's new book, Secure Intelligent Machines, and what it takes to build a cyber protection program for AI. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.

This episode features Max Aulakh and Joel Yonts discussing Joel's book, which explores the security aspects of AI and how to build a cyber protection program for it. They highlight the lack of literature on this topic and explain why they felt the need to fill the gap. The hosts emphasize the importance of finding trustworthy sources of information amid the noise surrounding AI and cybersecurity.

The touchpoints of our discussion include:

  • The purpose of Secure Intelligent Machines
  • Why almost nothing has been written about how to "secure" AI
  • What qualifies Joel to write this book
  • When Joel started writing the book
  • Who the book is written for

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and excelled in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

 

Resources:

Secure Intelligent Machines

Max Aulakh 00:03 – 00:17: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh.

 

Joel Yonts 00:17 – 00:25: And Joel Yonts. Join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook.

 

Max Aulakh 00:26 – 00:43: All right. Welcome everyone to this exciting episode of Emerging Cyber Risk. So today we're going to dive into something really exciting. My friend Joel, who you're all familiar with, has been writing a book. Joel, how are you doing today? How's your day going?

 

Joel Yonts 00:43 – 00:46: Going well, going well. Finishing up the week. A lot of interesting stuff happening.

 

Max Aulakh 00:46 – 02:26: That's awesome. Well, I know that, Joel, you have spent so much time writing this book. And Joel, when we first met, I don't think ChatGPT was out at that time. But now, what I'm seeing is that there's a sea of noise. There's so many different people writing different things. But I know you as a friend, and you had been writing about this way before OpenAI made their announcement. And the question is, the big question is, how do you find something useful when there's a lot of noise? And who are the people that you can trust, that you can go to? And then, of course, the C-suite leaders that are in charge of the mission, they're starting to take on AI, and they might be struggling with how to plug security into AI, or vice versa. So this book that Joel has written, I love the practical nature of this book, how a cyber working professional might go about starting a program. This is one of my favorite chapters. I'm not going to go into it too much. But before we dive in, my favorite part of this book, for those of you who are listening, is that it starts out with a dedication to a higher power. It dedicates itself to God, and I think that's going to be incredibly important, because who is going to be the arbiter of truth? Who is going to distill what is ethical and what is not? So stick around for this episode. I had to be brave enough to suck at learning something new here. So with that, Joel, man, before we go into the book, tell me a little bit: why did you decide to write this book, what is the book called, and why did you decide to go on this arduous journey to write it?

 

Joel Yonts 02:27 – 03:26: Certainly. So thank you for this opportunity. I love talking about this topic, and I love any topic that you and I discuss, so this is doubly good for me. So when I started looking at this space, I knew AI was coming fast, and I could see it coming. I was building up some capabilities and research in this space. But what I noticed was that all the books were about either enablement of AI technology or the use of AI. Or finally, if cybersecurity and AI were used in the same sentence, it was about how to use AI for cybersecurity. Nobody was writing about what security of AI looks like. When I started doing a search for books, articles, and whatever, there was no complete manifest, there was no source to go to for this information, and it largely hasn't been developed. So that really triggered me to say, look, this is a big gap, and it's going to be an important gap to fill pretty quickly as we mass-adopt this now.

 

Max Aulakh 03:26 – 03:42: So are you saying, Joel, that maybe there was material, but it was more academic in nature? When you say there was nothing out there, do you mean for the cybersecurity community or the artificial intelligence community? What do you mean there wasn't much information out there?

 

Joel Yonts 03:42 – 04:49: Well, certainly in the AI and data security space, there was tons of information and a flood coming out, especially over the last 18 to 24 months. But specifically, when you look up cybersecurity and AI, even in the academic realms, there weren't people talking about it. And there was maybe a notion that, well, is it any different than any other cybersecurity? Maybe there was a lack of understanding there, but it really wasn't covered. The most I could find at the time was a little bit of information around theoretical data poisoning and a few other things. MITRE and NIST have started down the path of doing some work, putting together some frameworks. And I love the products they put out, but if I'm being quite frank, those are incomplete. They do not cover the entire landscape. And as is the nature of those products, they put them out in public and then iterate through them. So it's no ding on them, but it was just underdeveloped. So when I say nothing, as far as how to build a cyber protection program for AI, there was literally nothing I could lay my hands on, academic, practical, or otherwise.

 

Max Aulakh 04:49 – 05:21: Yeah, that makes sense, because whenever MITRE or NIST takes on a project, there's usually a very good scoping exercise they go through. Like, what is it that we're trying to accomplish? So I can imagine they wouldn't necessarily want to build something that covers a whole cyber program around AI. So Joel, when did you start writing this book? Because it's 300-plus pages, man, so I know it took you a while. When did you start the journey to start authoring? When did you decide, yeah, I'm going to do this, and actually start?

 

Joel Yonts 05:23 – 06:03: Interesting. This has been developing in my mind for multiple years. So it's about 400 pages, and I remember every single one of them, it feels like. But when I actually sat down to put pen to paper was just a little over a year ago. And it was one of those things where there was so much to say that it just started flowing. It was a pretty exciting thing, because my research focus had been in this space for, like I said, a number of years. So there was a little bit of pent-up material that was ready to roll. And actually, I had to cut a large section of it and put it in the next book that I'm writing, just because, well, I needed this one to actually come out at some point.

 

Max Aulakh 06:03 – 06:28: So there's another one that's going to come out. And man, the timing was just right, right? Because we knew there was a company called OpenAI, like a lot of other companies, but we didn't know they were going to release their ChatGPT, and all of a sudden it's become the most relevant topic to talk about. So, man, the timing on this book is perfect.

 

Joel Yonts 06:28 – 06:57: Well, I will tell you, I did have to go back and rewrite part of chapter one, because I started that in, I guess, the summer of 2022. And in that chapter, I was trying to build the case for why you needed to build a cyber program for AI. And there was a lot of good information, but it was a little bit of convincing. And what I found was, by the time March of this year rolled around, I had to go strip that out, because it was no longer needed. No one needed to be convinced anymore that it was here.

 

Max Aulakh 06:58 – 08:02: Yeah, that's right. That's right. You know, as security professionals, we always have to build the business case for doing something, right? Normal security or security around whatever the issue might be. I remember software, like 10, 15 years ago, we had to inject security into the SDLC. But yeah, ChatGPT and all of the different news out there made the case for us. So I can imagine you had to go back and edit. But I did like chapter one, right, because it talks about the fundamentals of building a program from a chief's perspective, a security leader's perspective. One of the things I was curious about: we've been doing security for a long time, but what is the difference? If you had to name a few big differences between a regular cybersecurity program and one that encapsulates artificial intelligence, what are some of the big differences that a cybersecurity professional wouldn't necessarily know about, given that those leaders have been doing security in a traditional way for a very long time?

 

Joel Yonts 08:02 – 09:44: Certainly. I think one of the core and fundamental differences is that in this space, it's not just another application. In order for AI to get its value, it needs to be able to see patterns humans can't see, or operate with a level of autonomy without direct human supervision. And those things in themselves run contrary to things like compliance and security right off the bat. Those are gaps. And the other thing is, you don't necessarily program the logic in AI. You program the logic to contain the information, like the containers, but ultimately you feed in data and the machine builds its own correlations. There's no if-then logic entered into the system. It learns those patterns itself, and it is very difficult for us to see what it has learned. If it's a simple model, yes, but complex models are largely opaque. There are complex knowledge patterns; if we talk about large language models, when they train them, there are knowledge patterns in there that they actually go back and probe, trying to understand what exactly the model knows. And there have been a lot of surprises after the fact. So I could keep going, but it's a couple of those concepts: we're no longer the ones programming these machines. We're setting the conditions for them to learn, and then they learn the logic themselves and we apply it, which creates a lot of interesting situations, as I discuss throughout the book, about how security best practices and compliance best practices have to be adapted to protect those systems or to comply with regulatory standards, for example.
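To make that point concrete, here is a minimal sketch (our illustration, not code from the book), assuming scikit-learn: no if-then business logic is written anywhere below, so whatever pattern lives in the toy training rows becomes the model's logic.

```python
# Illustrative sketch: the data, not hand-written rules, becomes the logic.
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy feature vectors
y = [0, 1, 1, 0]                      # XOR-like labels, standing in for "requirements"

model = DecisionTreeClassifier().fit(X, y)

# The learned "program" is a tree of thresholds the machine derived itself;
# an attacker who can alter X or y alters the logic without touching code.
print(model.predict([[1, 0]]))  # -> [1]
```

The security consequence is the one Joel describes: reviewing the source code tells you almost nothing about what the trained system will actually do.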

 

Max Aulakh 09:45 – 10:38: Yeah, Joel, let me ask the question in a different way, because if I'm looking at this as a CISO, and all of a sudden my company has to deal with this issue of artificial intelligence, and I'm used to running a security program with the traditional things, you mentioned that these systems learn on their own, right? What are some of the things that are going to be radically different from my traditional security program that I may not be aware of, as I'm trying to set up the governance for this dawn of a new era, as you call it? I'm trying to set up the governance for something brand new, and all I know is my traditional program of GRC and security operations with sub-functions underneath it. As a chief, what are some of the deltas that I might be blindsided by?

 

Joel Yonts 10:38 – 13:28: Well, one of the big ones right off the bat, pulling on that thread of the machine learning on its own, and there are other domains besides machine learning, I should say, but that's one of the bigger ones, and in the book I run through the various domains and their nuances, is that data becomes a much different animal. Because if you think about developing a product today, there are a lot of requirements done up front, and design, and you build the logic and a proof of concept. Maybe you bring in samples of data to test against it, and then eventually you get to where you're applying the full data. But humans have designed, and there are peer reviews of, all the algorithms and logic that go into these programs. Well, in the case of AI, think about data like a river, right? Where data flows inside your company. If you think about the large data sets that you're going to use to train these models, maybe it's a product model or a customer model, there are pockets and tributaries of data that flow from small trickles, small applications, and accumulate in data warehouses until eventually they build into data sets large enough to train these models. Well, any of those points along the way introduces opportunities for, say, a malicious actor to inject data into those systems, and the data will actually become the logic in these applications. And sometimes it's really difficult to know whether, for example, a poisoned pattern has been learned by the system. And that's just one of them. The other thing is, once this knowledge gets into a system, it's generalized and you think it's protected, but there are a lot of attacks. Say you train a model on sensitive data. Well, there are a lot of attacks developing now where an attacker with access, not even administrative access, just the ability to query that model, can reconstitute the original data. So there can be loss of data pulled out of these models. You think it's abstracted, but it's not. And so it makes the whole data governance chain, from beginning to end, much more complicated. And those attacks I was talking about, they're model inference and model inversion attacks, where I can go and determine that this particular PII was part of this data set and cause a data breach. The data custodians and the data governance models inside companies largely don't have that accounted for. And matter of fact, there's not an awareness. So building awareness about the impacts of the data chain is a pretty substantial one. And that's one piece of it. I guess the other one I will throw out is that the development process is much different. How you develop these models does not match our SDLC at all. And I'll pause right there. Is that kind of in line?
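As a hedged sketch of the query-only attacks Joel mentions, the toy below shows the intuition behind membership inference: a deliberately overfit model is more confident on rows it memorized, so confidence alone can leak who was in the training set. The model, data, and threshold are all illustrative assumptions, not an attack from the book.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, 5))
train_y = rng.integers(0, 2, 200)  # random labels force pure memorization

# bootstrap=False makes every tree see (and memorize) every training row.
model = RandomForestClassifier(bootstrap=False).fit(train_X, train_y)

def likely_member(record, label, threshold=0.95):
    """Query-only membership test: unusually high confidence on a candidate
    record hints that it was part of the training set."""
    confidence = model.predict_proba([record])[0][label]
    return confidence >= threshold

print(likely_member(train_X[0], train_y[0]))   # memorized row: True
print(likely_member(rng.normal(size=5), 1))    # fresh row: usually False
```

Real attacks are far more sophisticated, but the governance point stands: the query interface itself is a data exfiltration surface.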

 

Max Aulakh 13:28 – 14:49: No, man, I think that makes sense, because when we look at the data itself, we traditionally think about a data classification model, and you're classifying all these little data elements. But when you're talking inference, you can infer off of the data, about the data, not the actual sensitive topic. I think that's a big difference in even how we approach data classification itself. We can't approach it the same way. And that's really what my question was getting at. As a chief security officer, you may be thinking about your normal program from a traditional military perspective, that's where I come from, on how to classify information. You just put your tags on it, you put your stickers on it, but with inference kinds of attacks, it seems like we've got to think differently altogether. And so that's very helpful. The other thing, Joel, that I found interesting is that in one of the chapters you actually took the time to define what intelligence even means, right? And the chapter is called Defining Intelligence. Do you find that in our industry we're still struggling with definitions and terminology? I love that you did the defining of what intelligence means, but why did you feel the need to include that sort of chapter?

 

Joel Yonts 14:49 – 16:21: Well, I think it starts with the fact that vendor products have claimed artificial intelligence for two decades now, and often it was little more than statistics. So, one, it's been around, but when we think about securing something, you've got to know what it is. And a lot of times it gets really confusing. In my conversations, a lot of people will equate AI with ML, or AI with large language models, or whatever. So first, just starting with what is it really, how do we define this, is pretty important. And once you start defining what it is, then you can flow into what the domains are, and it builds a hierarchy of definitions that gives you a taxonomy to go at this problem space. Because right now, I feel like security leaders are just getting steamrolled by product names, and there's no way to classify them. What's the difference in DALL-E versus some automated decisioning system versus ChatGPT, right? I mean, there's this constant flood, and I think being able to rightly understand what it is we're dealing with and classify it a little bit matters. And then underneath that, as I work through in the book, there are six or seven primary domains of AI, and each has its own attack vectors, attack surface, and defensive strategies that you should go after. And it gives you a way to go after the problem in a non-whack-a-mole way. So that's the reason I thought it was very important to put in the book.
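As a toy illustration of the kind of hierarchy Joel is describing (the domain names and risk lists below are our assumptions, not the book's actual taxonomy), classifying a product into a domain immediately tells you which attack surface to worry about:

```python
# Hypothetical taxonomy sketch: map products to AI domains, and domains to
# their characteristic attack surface. Entries are illustrative only.
AI_TAXONOMY = {
    "machine learning":      {"examples": ["fraud-scoring model"],
                              "key_risks": ["data poisoning", "evasion"]},
    "generative AI":         {"examples": ["ChatGPT", "DALL-E"],
                              "key_risks": ["prompt injection", "data leakage"]},
    "automated decisioning": {"examples": ["loan-approval engine"],
                              "key_risks": ["logic manipulation", "bias"]},
}

def classify(product: str, domain: str) -> dict:
    """Attach a product to a domain so its risks stop being whack-a-mole."""
    return {"product": product, "domain": domain, **AI_TAXONOMY[domain]}

print(classify("ChatGPT", "generative AI"))
```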

 

Max Aulakh 16:22 – 17:38: I loved it, because if you can't really define the problem, good luck solving it. And I think right now, and even I'm guilty of it, we tend to use AI, ML, and whatever else we can throw at it almost like synonyms, interchangeably. And what you're really defining in this chapter is, no, these are distinct topics within this broad category. And to your point, a taxonomy of how to classify it, how to control it, and whatnot. And then you also get into technical specifications, because in cybersecurity we not only have to write the rules, but we have to know what the rules actually mean at a technical depth. So I love that particular area. And then the other area that I really liked, this is classical, man: you don't know what you're protecting unless you inventory it, right? So the whole asset management piece. And when I look at AI, what is an asset? I think of it as property, like a piece of property that I own. What is that in the context of artificial intelligence? Where does it start and end? I know you've got taxonomies and stuff, but in your mind, how flexible is that taxonomy?

 

Joel Yonts 18:00 – 20:15: is that when you start thinking about the types of the domains of AI, what are the discrete objects, these are software objects for the most part, that constitute the AI model? Because at the end of the day, each of those will contribute to the overall risk picture, as well as what preventative controls you need to put in place. And it has attributes that talks about that you should capture what model, what domain you’re dealing with, what models are involved in putting this in place. And a lot of times, a complex application may have multiple models, and there’s different model algorithms. And then in labeling all of those, as well as the data assets that fed into those and data classification that goes into those, all those, when you start thinking about and peeling back the onion a little bit, A lot of what I cover in the development chapters and the operational chapters, there’s details about how AI is developed. Each of those are very important. A couple examples that I will throw out is that if you talk about training a machine learning model, I keep going back to ML because it’s probably the most well-known, but there’s other domains as well. You can train a model either in something called batch learning or online learning. And so the difference being, I have this large collection of data and I feed it in one time and train the model and then use the model, whereas online has a continual update model associated with it. Well, the natural security implication to that is, if I’m doing online learning, then there’s a way to continually train that model, which means an attacker, with the appropriate access, could poison that model much more readily than the one that has batch. Or, for example, if you have a model that’s doing batch training and you know you train once a month, then turn off the training interface to that model and lock it down that if any training happens outside of that window, it’s an alert. It’s that being able to apply that is pretty important, as well as every model has strengths and weaknesses both in operations but also a cybersecurity model. Like some models are much more susceptible to evasion or much more susceptible to data poisoning. Understanding where they’re at is probably important for the program.

 

Max Aulakh 20:15 – 21:54: Yeah, I don't want to discount it, Joel, because I sit back and think, if somebody were given the mission to enforce a security program in the context of anything, they're first going to ask, what is it that I'm actually protecting? So as boring as inventorying is, I don't see a way around it. It's a fundamental topic, so I'm so happy that you included it. But the other thing is, most people are going to lean into Microsoft and OpenAI and Azure. So when classifying what is part of our AI scheme, part of it is our data, part of it is the information we get back from OpenAI, and then the rest could be the entire operation, the result and the outcome of it. So I like that you actually took the time to create the attribute types. It reminded me of how to build a CMDB for classifying what it is you're supposed to be protecting, so as a leader you can at least start with something tangible. So I loved it, man. I loved that particular chapter. My other question was around creating intelligence. So Joel, I'm just going to ask you bluntly, why did you write this chapter? Because to me, it seems like people are already creating artificial intelligence. So what was the impetus for writing this particular chapter? Why did you feel it was important?
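As a sketch of the CMDB-style record Max is describing, here is one hypothetical shape such an AI inventory entry could take; every field name below is our assumption, not the book's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """Hypothetical CMDB entry for an AI system -- illustrative fields only."""
    name: str                    # e.g. "support-chat-assistant"
    ai_domain: str               # which AI domain the system falls under
    model_algorithms: list[str]  # models composing the application
    training_mode: str           # "batch" or "online" -- drives controls
    data_sets: list[str]         # upstream data assets that fed training
    data_classification: str     # highest classification of that data
    external_services: list[str] = field(default_factory=list)

record = AIAssetRecord(
    name="support-chat-assistant",
    ai_domain="large language model",
    model_algorithms=["third-party LLM via API"],
    training_mode="batch",
    data_sets=["support-tickets-2023"],
    data_classification="confidential",
    external_services=["OpenAI API"],
)
print(record)
```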

 

Joel Yonts 21:55 – 24:57: Certainly. And certainly people are creating intelligence and writing these models at a blinding pace. But guess who's not at the table? Security. And partly we've got this SDLC model that sits out here and covers traditional development, and we might feel good about that, but how does it apply? A lot of people don't even know where it breaks down. So I felt it very important. I think data scientists will find this book interesting as well as the cybersecurity practitioner, and the cybersecurity components will obviously look fresh and new to the data scientists, but creating intelligence is not going to be new information to them. That's what they do day in and day out. That chapter is really there to inform the cybersecurity practitioners and the leadership. But I will give one example. If we talk about, again, data, there is a step in the development of a model that doesn't exist in traditional development, for the most part. It's what I termed data exploration. In order to build a model, there's no hard science that says, oh, I've got this data, I'm going to need this model, and it's going to have these parameters. Rather, what you do is take a data set and massage it, and you do a lot of different things to the data elements in it. For example, you may convert text to categorical data. You may normalize data into certain ranges. All these things affect model performance. But then you may try different model algorithms to find out which one's going to perform better. And there are so many different hyperparameters. It's really part science and part art, if you will. That process of exploration is fundamental and critical, but it doesn't exist within the traditional SDLC. It really doesn't. So what does that mean from a cybersecurity perspective? Unless you develop a dedicated space for this to happen in a server environment, guess where it's going to happen? On your developer's workstation. So the developer is going to copy what is potentially a very sensitive data set onto their workstation, and they're going to create 15 copies of it, because each copy may be slightly different. And that creates data proliferation, data that moves out of sensitive areas into less sensitive areas. And that can create specific issues with data retention, data loss, and a lot of other things that go along with it. One of the things I find is that when I articulate this to security leaders and practitioners, there are a lot of light bulbs that go off. And there are steps we should be taking to help protect that and adapt our model. But unless we understand how this process works, or the difference in training versus validation versus test data, we won't understand the security ramifications of some of the controls we need to put in place.
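To show what the data exploration step looks like in practice, here is a small sketch (assuming pandas and scikit-learn; the data and file name are invented) of the transforms Joel mentions, ending with the security-relevant habit of saving yet another local copy:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "plan": ["basic", "pro", "basic", "enterprise"],  # text column
    "monthly_spend": [20.0, 99.0, 25.0, 499.0],       # wide numeric range
})

# Convert text to categorical codes -- one common exploration transform.
df["plan_code"] = df["plan"].astype("category").cat.codes

# Normalize the numeric feature into [0, 1] -- another typical transform.
df["spend_scaled"] = MinMaxScaler().fit_transform(df[["monthly_spend"]]).ravel()

# Each experiment variant tends to get saved for comparison -- this is the
# proliferation Joel warns about: sensitive data multiplying on workstations.
df.to_csv("experiment_v1.csv", index=False)
```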

 

Max Aulakh 24:57 – 25:16: Yeah, that makes sense. But Joel, when I read this book, and I'm still getting through some parts of it, it felt like you wrote it for security leaders first. What was your intent in terms of the audience for the book? You also mentioned data scientists, but who did you write this particular book for? What did you have in mind?

 

Joel Yonts 25:17 – 27:04: I think that, in my mind, there was a core group that is going to be the most impacted, but there are lots of different roles that will find this useful and interesting. The core would be the people responsible for application security and cybersecurity. The book opens with a fictitious story about a CISO on the hook with the board of directors asking, are we secure from an AI standpoint? And the CISO is left with, how do I answer this question? As a longtime CISO myself, I know how that feels. So it's about empowering organizations to build a proactive program, to get ahead of this, to have confidence that they're delivering it. So it would be that core group. However, when I'm talking to data scientists, they find it very interesting too, because, you know, we've dealt with engineers and developers before. They think about the positive use cases. They don't necessarily think about the negative use cases, the abuse cases. And when I talk about some of these things, there was a section on poisoning the data pipeline through automation that, as I was talking to some data scientists recently, they were like, wow, that would really work. And I guess that's the other thing: I put a lot of proof-of-concept attacks in there that haven't been articulated anywhere else, because it's the realm of what's possible. So I think data scientists would find it interesting. I think CIOs and CTOs, certainly, because they're being asked, what about AI? So a large number of technology people would find it interesting, data scientists, cybersecurity. And there are parts of it for legal professionals and some chief risk officers as well.


Max Aulakh 27:05 – 28:48: Yeah, Joel, we're all in this together. I started this off by saying I'm going to be brave enough to suck at something new, right? That really applies to a lot of different roles, because this chapter, when I questioned, hey, why was this even written, since cybersecurity professionals are not creating intelligence themselves, it's really hard to have a relationship with the development team or the machine learning team, whatever team is actually doing this work, without really understanding it from their perspective. So I appreciate it from that side. And then also, you're absolutely right. I think we're the consumers of AI. We're watching it, we're enamored by it, but we're not really partaking. We're not really contributing. And it's fantastic that now there's actually a source. I think this is probably the only book out there in the market that talks about it from the perspective of a chief security officer and their team and what they've got to deal with in terms of building a program. So I loved the start of the book, and even though you called your own chapter boring, I actually liked the asset management chapter and thought it was good. But the future, the future of risk, right? Man, I had no idea you were working on the second part of this book, but what's next? What are the over-the-horizon, forward-looking areas of this book that are interesting? Or maybe another way to look at it, since you've already published this: are there things where you're like, man, that future has already happened? It's already here.

 

Joel Yonts 28:48 – 29:33: Certainly. Yeah, I think that's really good. When I look at near term, one of the things I was discussing with someone earlier was CISO involvement in AI decisions. And the reality is, and I'm saying this as a CISO, and I'm making a broad generalization here, I know there are going to be unique situations out there, but we're in a situation where the value proposition of AI is so high that CISOs don't have an equal seat at the table, and I don't think they will at this point. I've seen it over and over again. It's not going to be, what do you think we should do with AI? The question that's going to come back to the CISO is, do you see a reason why I should not sign up for all the benefits and all the new functionality?

 

Max Aulakh 29:33 – 29:39: And, you know, the other thing is, hey, I already did this. Can you now take it?

 

Joel Yonts 29:39 – 31:19: Exactly. Exactly. So, I mean, the only way you're going to be able to say no is if you can point to something specific. And there are examples of that, sure, but the point is it's coming. And so I think what we will see very quickly over this next year to 18 months is continued mass adoption, because it's leaking in. Even if we want to say, I'm going to tap the brakes in my organization, every new product update is going to come with it. Before we started this call, you and I were playing with Zoom's latest AI feature. That's built into the product, right? And so I think you can't stop it. It is coming. And so I think we're going to find a lot of interesting situations where it's going to stress compliance, it's going to stress protections, and so forth. I think that's going to be near term, and then there's going to be a rise of cyberattacks against those systems. I could develop what that looks like, but just trust me on it. I talk about why in the book, but I think that's something we're going to start seeing pick up at a great pace, just because AI is going to control the flow of money, access, power. I mean, there are so many different aspects, which are the exact ingredients that drew in cyber threat actors for the past 30 years. I think it's been 30 years, 20 years, 25, whatever it's been. I'm losing track. So I think that's going to be a big piece of it. But the other thing that I talk about in this book that I think is a big thing is the transition from AI as a tool to AI as a member of the workforce. The last chapter of the book talks about that a little bit.

 

Max Aulakh 31:20 – 32:05: I don't know, have you had a chance to document it? I haven't gotten to the last chapter, but it reminds me of the latest release of, you know, get your own custom GPT. Essentially, your own member of the workforce that is very much trained for whatever use case you want. And that's why I mentioned that maybe the future is already here, right? That's kind of what it feels like, and I know that just happened a couple of days ago. So yeah, I'm not done with it yet, Joel, but man, I appreciate you writing this book, because I think there are so many leaders from different walks of life and different careers who are going to find this book very beneficial to their journey into learning about artificial intelligence.

 

Joel Yonts 32:05 – 33:59: Well, fantastic. It was certainly something that I wanted to get out there into this space. I'm very passionate about this, because I think we're going to run into some pretty significant issues. I think AI is going to be wonderful. I think there are going to be a lot of great things we gain from it. But there are going to be some significant pain points. And I really think we get to write our future a little bit. It's going to depend on: do we invest in smart cyber protections? Do we think through what the ramifications of AI in the workforce are? I don't have time to build the case for why, but I think it's going to be sooner than we think. Microsoft Copilot, for example, is bringing that next year, and that's one example of the beginning of that heavy transformation. It doesn't take much past that to start having AI as part of the workforce. And it's going to create some real disruptions unless we've started thinking through the regulatory side, the compliance side, how we have oversight, how we allow the autonomy to get the value out of it with efficiency but still have enough control to protect. And there are a lot of different strategies and thoughts around it. But it's one of those things that we're not going to be able to just set on its own course and let it work itself out. Because there are two different balls in the air at this point: we not only have the chaos of technology proliferation, but we also have the cyber threat actors, the nation states, that are an active adversary in these things. So we've got threat actors that are going to be using the same technology, or weaknesses in the technology, to orchestrate attacks, which is going to create a really interesting mix. I think that if we get ahead of it and start going after this aggressively now, we can be in a good spot. But if we don't get serious about this and we find ourselves behind the game, that's a tough spot to be in.

 

Max Aulakh 33:59 – 34:31: It is, Joel. And like I said, there's no way to get ahead of it without the fundamental education that you have put out. Joel, I want to thank you for this. For those who are listening in, get a copy of this on Amazon. I was one of the lucky ones to get a copy at no cost to me, Joel, so thank you so much. But I'm still going to go buy one. And man, I just appreciate you writing about this topic. I'm having a lot of fun reading it, and I'm looking forward to the next version of this book.

 

Joel Yonts 34:31 – 34:36: Well, thank you, sir. It’s always a pleasure to chat and I look forward to future conversations.

 

Max Aulakh 34:39 – 34:50: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel Yonts 34:50 – 35:04: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.