Emerging Cybersecurity Risks

The Intersection of AI and the Military: A Discussion with Taylor Johnston, Former Chief of Innovation for the U.S. Air Force


On this episode of the Emerging Cyber Risk podcast, our guest is Taylor Johnston, Chief Operations Officer at the USF Institute of Applied Engineering and former Chief of Innovation for the United States Air Force. Join us as we investigate the integration of artificial intelligence and automation into the Air Force and the wider military. Tune in to discover the potential applications and use cases, the benefits already being realized, the current focus on autonomous systems, and the parallels between AI and the atomic bomb.

The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.

The touch points of our discussion include:

1. Where AI fits into the US government’s operations
2. Balancing efficiency and effectiveness when applying AI to military settings
3. Adoption of robotic process automation across sectors
4. The potential of autonomous systems in the military
5. The interoperability of different AI models
6. How AI mirrors the atomic bomb
7. The evolution of regulations in the military and how they apply to AI

Taylor Johnston Bio:
Taylor Johnston is currently the Chief of Operations for the USF Institute of Applied Engineering, where he leads a multi-disciplinary team in solving complex problems for the Department of Defense, US Government agencies, and a variety of businesses. Prior to this, he served in the United States Air Force for over twenty years, most recently as Chief of Innovation. Across his career, he has led diverse teams across different cultures within government and the Air Force, specializing in projects involving collaboration with private sector companies seeking to innovate with the military.
Taylor Johnston on LinkedIn

Get to Know Your Hosts:
Max Aulakh Bio:
Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He has trained and excelled while working for the United States Air Force. He maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.
Max Aulakh on LinkedIn
Ignyte Assurance Platform Website

Joel Yonts Bio:
Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cyber security. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn
Secure Robotics Website
Malicious Streams Website

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh 

 

Joel – 00:00:18: and Joel Yonts. Join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook.

 

Max – 00:00:28: Thank you everyone for joining us today for Emerging Cyber Risk. Today we have an exciting topic about artificial intelligence and how it applies to the Air Force, as well as perhaps classified environments. But before we get to that, Joel, how have you been? I know we haven’t heard from you, and our audience hasn’t heard from you, in a while, but how are you doing?

 

Joel – 00:00:48: Oh, doing well, doing well. My book on protecting AI from cyber threats is going to the editor. So that’s a topic that I hope to talk about with the audience here shortly. But this topic around the Air Force and AI is something that’s been interesting to me for some time. Last fall, I had the privilege to hear General Olson speak. He was the chief AI officer, I believe, of the Air Force. And it was interesting. His comment was that AI was the highest unclassified priority of the US Air Force. At least that’s what he said in the talk. And so I’ve been very fascinated by that ever since, because I see a lot of potential applications. So I’d love to get into this today a little bit deeper, as much as we can go, because I know there are areas we can’t go, and then talk about applicability broadly. So with that, I’d like to introduce our guest, Taylor Johnston. If you would just go ahead and tell us a little bit about your background, who you are and how you got to this point. And then I’d love to get into that topic.

 

Taylor – 00:01:45: All right, yeah, greetings. Great to meet you, Joel, and great to see you again, Max. My name is Taylor Johnston. I’m the former head of innovation for the Sixth Air Refueling Wing at MacDill Air Force Base. I’m a pilot by trade, funnily enough. I ended up getting into the innovation ecosystem back when I was commanding a unit called contingency response. Contingency response is in charge of, you know, setting up an installation’s logistical infrastructure anywhere, from a dirt strip in Northern Syria to a hurricane-damaged airfield and everything in between. I’m a pilot by trade, as I said, so I’ve flown C-21s, C-130s and KC-135s. But as the head of innovation, we got to manage about 43 different projects worth just over 10 and a half million dollars over the span of two years, and then brought in some of the new cool techniques with what we call TacFi and StratFi. Those are funding vehicles for small businesses. But that’s kind of me in a nutshell.

 

Max – 00:02:37: Awesome, so tell us a little bit about where you’re at right now, right? So after you left the military, what are you doing right now? And give us a little bit of insight into some of the more innovation-based projects that you’re currently handling.

 

Taylor – 00:02:50: Yeah, so right now I’m the Chief Operating Officer for the Institute of Applied Engineering here at the University of South Florida. The Institute of Applied Engineering has one foot inside and one foot outside the university, so we are able to leverage university talent and expertise to solve problems for the warfighter. We currently have an $85 million IDIQ with Special Operations Command, and we get task orders on that. So a requirement comes from SOCOM, they’ll articulate it in a task order, and we’ll go out and solve it, either internally through our own engineers, or we’ll find the right PhD, either at the university or in an academic consortium of about 30 different universities that we can tap into. So it’s really about providing the best solution from the right expert to help the warfighter.

 

Max – 00:03:37: You know, for a lot of people who are not familiar with government circles, you mentioned TacFi, StratFi, I’m familiar with those terms, but I don’t know, Joel, those are probably foreign terms to you, right? Break the ecosystem down for us, right? And where does some of this artificial intelligence, where does all of that fit in for the government?

 

Taylor – 00:03:56: So cyber is a really tough market to get into with the DoD, right? In the DoD and the Air Force, part of the security of those networks is what we call information levels. So we have multiple different networks. We have the basic type of network, and then we have an information level four, where we can put what we call controlled unclassified information, i.e., some of our day-to-day emails are on that level. And we have to keep it at that network security. And it’s not really a helpful testing ground for artificial intelligence, or for small businesses, for that matter, to come and work with the DoD, because we keep our things so close hold. Because it’s such a massive network that as soon as I expose one chink in that network, the network vulnerability just goes out the window, right? I mean, it’s just so vulnerable because it’s so massive. So we’re very close held with what we can test and what we like to do with small businesses. And that gets to the reticence. And what you were talking about, Joel, is, yeah, we see AI as this magic solver for a lot of our issues, but we don’t know how to implement it because we’re so worried about the network vulnerabilities that come with the AI that exists out there. If that makes any sense.

 

Joel – 00:05:08: Oh, it does. And when I think about some of the challenges, well, one, I know AI is exploratory in general. You don’t sit down and know all the parameters you’re going to put into a model at the moment you go to design. It works differently. The data kind of drives the design to some degree. So you get that exploratory nature, but you also need a lot of data in order to train this model. So I would imagine those two things are probably what you’re talking about. That would be challenging to bring in an outside organization to work on those terms, I would imagine.

 

Taylor – 00:05:39: Yeah, and the lovely thing about the DoD, and the Air Force too, is the amount of data and the data labeling. So training that model and tuning that algorithm to the specific data. One of the things I got to do, back in 2014 and 2015, was work on one of the enterprise research problems around readiness. And we incorporated 37 different IT systems to provide an automated readiness picture of a specific unit. So looking at the personnel systems, looking at the training systems, looking at the equipment systems, 37 different systems. And of those 37 systems, about 30 of them identified a unit in a different way. And you’re like, oh, because training that model and getting that model right to identify the right sort of data, and figuring out which is the right sort of data in those different systems, is just a pain in the rear in itself. Just the data labeling. But you can train that algorithm to figure that out.
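To make the data-labeling pain Taylor describes concrete, here is a minimal sketch of the kind of identifier reconciliation such a readiness effort would need before any model training: different source systems spell the same unit differently, so records get normalized to a canonical key first. The system names, fields, and alias table below are hypothetical illustrations, not the actual systems referenced in the conversation.

```python
# Illustrative sketch: reconciling unit identifiers that different systems
# spell differently before records can be merged or used for training.
# All system names, fields, and the alias table are hypothetical.
import re

# Hypothetical alias table mapping each system's variant spelling to one
# canonical unit identifier.
UNIT_ALIASES = {
    "6 ARW": "6th Air Refueling Wing",
    "6TH AIR REFUELING WG": "6th Air Refueling Wing",
    "006ARW": "6th Air Refueling Wing",
}

def normalize_unit(raw: str) -> str:
    """Collapse spacing/case quirks, then map known aliases to a canonical name."""
    cleaned = re.sub(r"\s+", " ", raw.strip().upper())
    return UNIT_ALIASES.get(cleaned, cleaned)

def merge_readiness_records(records: list[dict]) -> dict:
    """Group per-system records (personnel, training, equipment) by canonical unit."""
    merged: dict[str, list[dict]] = {}
    for rec in records:
        unit = normalize_unit(rec["unit"])
        merged.setdefault(unit, []).append(rec)
    return merged

# Example: three systems, three spellings, one unit after normalization.
records = [
    {"system": "personnel", "unit": "6 ARW", "fill_rate": 0.92},
    {"system": "training", "unit": "6th Air Refueling Wg", "current": 0.88},
    {"system": "equipment", "unit": "006ARW", "mission_capable": 0.81},
]
print(merge_readiness_records(records))
```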

 

Joel – 00:06:31: Continuing on that thread, though: the capabilities, if you get it right. One of the things that I’m seeing across the board is that for certain types of problems, AI can see things that humans can’t see, or it can certainly see them faster than a human can. Where do you feel like that’s going to be the biggest enabler for the Air Force?

 

Taylor – 00:06:48: You’re completely right that AI can see problems before we can see them and can interpolate data that we hadn’t even thought about, you know, the correlation there. An AI algorithm is always looking for efficiencies. The Air Force and the military itself most often associate themselves with effectiveness over efficiency. So there’s an incongruence there. If you were giving that AI to a large multinational corporation such as Amazon, it’s gonna look for how can I save a minute here or a minute there. There’s a fundamental difference between, okay, I’m looking for efficiency, but it may sacrifice effectiveness. We can look at the air tasking order timeline, right? It goes from target identification to validation and then cycling it into, do I actually have the right aircraft to put the right bomb on target? And do I have the right refueler to fill that fighter or bomber to put the right bomb on the target? The algorithm itself, and this is how you train the algorithm, may say that, you know, this is not an efficient way of using resources, so don’t even do it, versus the validation may be that, yes, we’re wasting resources, but we’re still being effective because we’re still putting the warhead on the forehead, if that makes any sense. But I think in the target acquisition process, AI can definitely play a role. We are a very large logistical infrastructure and organization. If you start thinking about the amount of permanent changes of station, or moves, in the military, I know it’s always in the news, every summer the military is doing moves and everybody’s doing it. So I think over a 21-year career, I moved 11 times.

 

Max – 00:08:20: Yeah, I think that’s where we’re going to see the first impact and lift. In my experience, being prior Air Force too, there’s a lot of back-end administrative stuff that just happens, whether it’s going from one base to another, or air travel plans, regular plans. I think we need to see improvements on that side before we take it to autonomous aircraft and start to let an algorithm make decisions that typically a human is making, especially when it comes to target acquisition and any kind of a weapon system. But I don’t know. I mean, we’re moving at lightning speed, right? I have no idea what the adoption curve will be on the other side for those kinds of things.

 

Taylor – 00:09:02: And we’ve already seen it, Max, on the order validation. So part of that PCS process, I think you remember, just to validate your orders to move took three weeks to a month. Now the Air Force Personnel Center has got it under a week, because it has an AI that’s validating that type of information, because a lot of these moves are pretty standard, right? You knew that you were going from this base to this base, that you didn’t have an exceptional family member, you didn’t have any of these weird caveats, and your orders were very simple. And they’ve started to do that in the administrative sections. And I think you’re right that that’s where those AIs are first going to get on, is in the administrivia, to help speed up processes.

 

Max – 00:09:43: So you’re saying you’re already seeing some of this take place internally. And a lot of that, if I’m just a soldier and I’ve got to go from base A to base B, whatever orders I’ve got to get cut, most of that work is unclassified, right? Yep. So I think that’s where the testing ground is, Joel, right? We’ll start to see it in normal things, what we would consider normal day-to-day life, I don’t know, have the AI pay my Amazon bill.

 

Taylor – 00:10:10: And I think there’s a distinction to be made between automation and intelligence, right? You’re looking for some intelligent automation. We’re just getting at that basic level of automation right now, and that basic level of implementation on that automation, because there’s an investment of cost and time to train the young airmen on those automation tasks. So RPA, robotic process automation, and getting them to automate those processes, takes time and takes adoption. And we are terrible as an institution at adopting some of those more corporate tasks, because we have that backbone of a civilian task force back there.

 

Joel – 00:10:48: I mean, that makes a lot of sense. That’s what we’re seeing on the private side as well. One of the convergences I think is gonna happen is that RPA is gonna get a facelift really soon. And it’s already bleeding in. What we’re going to see is it moving from simple automation of logic to rational agents, which is the true model of AI these days. The Turing test is gone; rational agents is kind of where we’re moving forward. And rational agents can suddenly do more than just the five or seven very strictly programmed automations; they can make decisions and can function more like part of the staff. Do you see that on the rise in the near term for what you’re doing and what the Air Force is doing?

 

Taylor – 00:11:32: I would love to see that in the near term. At MacDill, we had Microsoft down teaching RPA. There is actually an RPA center of excellence in the Air Force that does road shows to teach RPA using both the Microsoft suite of products and UiPath, going from there and just pushing it more and more. But I think it’s still at the early adopter phase, which is frustrating, because you see the benefits and you see it just saving time and knocking time off your calendar. But again, it’s tough to get past that early adopter phase, which the Air Force is in. I’d love to say that we’re three or four years away from full adoption, but I think that’s still dreaming. But the data scientists, so there is an Air Force specialty code for data science, they’re all about the automation of a lot of the data flow. And I think that’s, again, where you’re going to see more of that AI creep in, is in the data analysis, how you’re integrating it earlier in that data analysis and providing different viewpoints on where that data can go.

 

Max – 00:12:29: Joel, before we move on from this, I’ve heard you say this term before, and you know I’m going to ask you about it: rational agent. For our listeners, that’s a very specific term when it comes to artificial intelligence. I know we talked to an attorney about how they would even manage this sort of concern. What is a rational agent? How would you define it?

 

Joel – 00:12:49: Well, we used to say a system would be truly artificially intelligent if it could pass the Turing test. Well, we’ve gone so much past that. The largest LLMs now can trick anybody into thinking they’re human if you didn’t know it was, say, ChatGPT. So the rational agent is the other theory. And there’s a lot of description that goes into it, but ultimately it’s saying that a computer agent, a digital agent, can operate in a less than pristine environment, more like a real-world environment where it’s not prescribed, and make decisions to get the best outcomes when there’s really not a single best outcome, just a best-case scenario. So thinking in the gray shades between black and white. And so it’s that combination of perceiving the environment in its raw form and then making hard choices between A and B when there’s no clear winner. That’s a rational agent in summary, I would guess.
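As a rough illustration of the rational-agent idea Joel outlines, perceiving a noisy environment, scoring imperfect options, and choosing the best available action rather than a prescripted one, here is a minimal, hypothetical sketch. The utility function, actions, and environment values are invented purely for illustration.

```python
# Minimal sketch of a rational agent: perceive a messy environment, score
# imperfect options, and pick the best available one. Everything here is
# illustrative, not a real system.
import random

def perceive(environment: dict) -> dict:
    """Take a noisy reading of the environment (sensors are imperfect)."""
    return {k: v + random.gauss(0, 0.05) for k, v in environment.items()}

def expected_utility(action: str, percept: dict) -> float:
    """Score an action given what the agent currently believes."""
    # Hypothetical trade-off: benefit of acting vs. the risk it carries.
    benefit = percept.get("opportunity", 0.0) if action == "act" else 0.1
    cost = percept.get("risk", 0.0) if action == "act" else 0.0
    return benefit - cost

def choose_action(percept: dict, actions=("act", "hold")) -> str:
    """A rational agent picks the action with the highest expected utility,
    even when no option is clearly 'right'."""
    return max(actions, key=lambda a: expected_utility(a, percept))

environment = {"opportunity": 0.6, "risk": 0.4}
percept = perceive(environment)
print(choose_action(percept))  # 'act' or 'hold', depending on the noisy reading
```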

 

Max – 00:13:46: Yeah, it reminds me of, in the military, we call them decision support systems, and decision theory, right, to some degree. But yeah, I don’t know. I mean, this is kind of getting into the topic of autonomy. How does somebody or something make a decision without human intervention? Earlier, Taylor, you mentioned, hey, we’re working on some autonomy, right? Like, we have some projects around autonomy. Talk to us about that, because I think that’s really what this rational agent theory is getting towards.

 

Taylor – 00:14:14: Well, I mean, to be fair, most autonomous vehicles, so if you take your standard UAV, unmanned aerial vehicle, little drone, your quadcopter, if it loses connection with you, so it’s a broken connection, there are some automated processes already on there so that it’s going to fly back to X point and start orbiting. Those already exist. And the SOCOM folks have already started talking about collective autonomy. So how do we get a whole bunch of autonomous vehicles together and give them the right algorithm so they can work together? So again, rational decision-making between autonomous vehicles that may come from different vendors, because you don’t want to be all on the same vendor, but they’re going to accomplish that same task. How do we get to the point where I’m not, you know, sole-sourced into that individual autonomous vehicle? And then how do I bridge? One of the things that I would say the US military is very good at is a thing called combined arms, right? We know how to map the individual soldier to the mechanized thing, to the ground, to the satellite support, to the aircraft flying overhead, and everybody’s working as a team. But how do you do that with autonomy? How do you provide an algorithm, how do you provide that rational agent like Joel was talking about, to integrate those autonomous services? That is the question that is next on the horizon.

 

Max – 00:15:39: Yeah, and I think one of the things we aim for in the military is interoperability and standards and open systems, things like that, and essentially having an agreed-upon protocol. And how would that work within artificial intelligence? Man, I don’t even think anybody’s thinking about that right now, because we’re still learning what this thing is, but how do we make it interoperable?

 

Taylor – 00:16:01: Well, and how do you train it too? Because with some of these systems, at least right now with the interoperability, there’s classified, unclassified, and then there’s stuff that’s at the TS level, all working together without knowing how they’re working together. But as you start to work multi-enclave, how are you training that algorithm across multiple enclaves so it’s making the same decision because it’s had the same training? Because, I mean, Joel brought up an LLM, right? The LLMs that have the breadth of information from the entire web on the unclassified side may come up with a different answer to the exact same question than an LLM that’s been generated on a classified network.

 

Joel – 00:16:42: Yeah, see, I get excited about this topic because I see so much potential in the future. And Max, when you were talking about how this would roll out, talking about the risk tolerance of the Air Force, it goes to what you were saying as well, Taylor: one of the things we need to do is stop thinking about AI as a model. These are digital entities. It’s not like I can only hire one staff person. I can have a thousand. I can have lots of different models that are trained in all kinds of different areas and then have them work collectively, functioning as one entity, seamlessly, because they’re digital. And so I think one of the things, when we’re starting to look at how this rolls out, is to start to break this problem down. And I just kind of want to throw it out and see what your thoughts are: if we could solve these problems that you were talking about in bite-sized chunks and build individual models, and then just work out a communication way to connect them all, I mean, that’s a pattern that could get us faster adoption into this, potentially.

 

Max – 00:17:41: I think the military or the Department of Defense, somebody, would have to step in to build the ground rules, because at least from my understanding of how it works, the problem gets defined and it gets thrown to 100 contractors. They’re building their own, you know, hey, we think the world looks like this, right? Everybody’s operating in a silo. So it’s going to come down to, in my opinion, some sort of an interoperability protocol, some sort of data exchange mechanism, some sort of an effectiveness check. That’s critical, but I don’t even know what that looks like. Taylor, have you seen anything like that in terms of just interoperability of these kinds of models? Is anybody talking about this?
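One way to picture the interoperability protocol and effectiveness check Max is reaching for is a shared message schema that any contractor's model would emit, plus a validation step every participant agrees to run. The sketch below is purely illustrative: the fields, labels, and checks are assumptions, not an existing DoD or industry standard.

```python
# Hedged sketch of an "agreed-upon protocol": a minimal, shared message schema
# that models from different contractors could emit, so outputs can be exchanged
# and sanity-checked regardless of who built the model. Hypothetical throughout.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelMessage:
    vendor: str          # who built the model
    task_id: str         # which tasking this output answers
    classification: str  # e.g. "UNCLASSIFIED", "SECRET" (a label, not enforcement)
    payload: dict        # the model's actual output
    confidence: float    # self-reported confidence, 0.0 - 1.0

def validate(msg: ModelMessage) -> bool:
    """Consistency/effectiveness check every participant agrees to run."""
    return (
        0.0 <= msg.confidence <= 1.0
        and msg.classification in {"UNCLASSIFIED", "CUI", "SECRET", "TOP SECRET"}
        and bool(msg.payload)
    )

msg = ModelMessage(
    vendor="contractor_a",
    task_id="task-042",
    classification="UNCLASSIFIED",
    payload={"route": ["waypoint_1", "waypoint_2"]},
    confidence=0.87,
)
if validate(msg):
    print(json.dumps(asdict(msg)))  # wire format any other participant can consume
```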

 

Taylor – 00:18:21: I don’t think so, at least not that I’m aware of. And they probably are, it’s just I’m not aware of them. At least I hope they are and I’m just not aware of it. Because then, if you start talking about that, if you’re dealing with standards across the board like you were talking about, you’ve also got to talk about the protection of those models, and the protection of that system, and the vulnerability of it too. How do you protect that system? Because now there are multiple pieces to that system. What happens with one hallucination on one piece of that system? Does that hallucination replicate across the system? That would be my worry.

 

Joel – 00:18:55: I mean, we could go further into it, but we’re starting to get into the theoretical side. But I guess the one thing I’ll challenge both you military guys on is this. I have the utmost respect for the military and all that’s done by everyone that’s participated in it. But when I look at it, it’s not just that we’re trying to get more efficient. We also have active adversaries that are advancing. And I would imagine that one of the drivers is, what happens if another nation state has a different risk tolerance and they’re willing to take more risks, and therefore they get more advanced capabilities sooner because they’re willing to take the hit?

 

Taylor – 00:19:29: For the listeners out there, Oppenheimer just came out while we’re recording this. And there’s a lot of equivalency there, right? With the atomic race and creating, for lack of a better term, a weapon, because AI can be a weapon, right? So we’ve created a weapon, we’ve created a capability. Did we create it the right way? And to your point, Joel, are we putting so many left and right boundaries on it that we’re not gonna get ahead of our adversary? And how is our adversary using it? We’ve created a great weapon, and how are we creating the shield next?

 

Max – 00:19:59: Man, this kind of topic is always mind-boggling to me, because I don’t know how we get ahead of this. It’s definitely gonna take, of course, the technical depth, but in my opinion, we have to have a lot of legislative changes in the way we buy things, the way we procure things. And being a cyber guy, working in the military with all these things we call accreditation packages, as much as I’ve done that work, it’s old and outdated. It doesn’t fit the modern times. On the commercial side, Joel, it’s all the GRC work that we’ve gotta do, right? In the military, you know, we look for these effectiveness measures to the nth degree on everything, things that don’t even matter.

 

Taylor – 00:20:38: Yeah, and I mean, just in the past five years, I would say five or six years, you could argue that the military software factory has made giant leaps and bounds, from the FedRAMP process to the cATO construct. But we’re still five years behind, because, just like you intimated there, Max, we have no guidelines or even requirements to put an AI or an LLM onto an unclass network. And you know we’re just gonna mess it up, to be frank.

 

Max – 00:21:07: Yeah, so right now, even if the capability is there, there’s no way, unless there are legislative changes, for the military to use it, which is terrible. So what ends up happening is shadow IT. Hey, I’m going to use it. I’m going to stuff data in that I’m not supposed to, maybe even classified data, and make my job easier, but nobody’s going to know it’s me. And then I’m out, right? So we need a proactive solution. We don’t know what that looks like, but I firmly believe we somehow need this really badly within our military units.

 

Joel – 00:21:39: So I think the same challenge, through a different lens, is going to be happening in the private sector, because companies are going to compete against each other. And if company A comes out of the gate with more AI enablement, they may bring new products and services to market and out-compete company B, or they may crash and burn and cease to be a company. I mean, either outcome. But do you think that if we can figure some of this out, the risk tolerance, and hammer out some of this in the private sector, that’s how it’s going to make it back, as an example to be reabsorbed? Or do you think it’s going to come out of the labs?

 

Taylor – 00:22:12: That’s a tough question. That’s a little bit of a chicken-and-egg argument right there. I think Max has got a point, and it may not be legislation, but regulation, I think, is probably the right term there. The DISA regulations and their requirements have to be reshaped. And unfortunately, in the government, we don’t have the AI expertise and those software engineers inside the organizations that make the rules. In a private company, you’re gonna hire the right people to create the right structure for your AI and your algorithms and what’s gonna happen, and you make your own essential rules because you’re running your own network. And there’s a financial incentive for you to hire the brightest software engineers who are creating the best artificial intelligences. There’s not that incentive right now on the DoD side, on the regulatory side. So we’re kind of shooting ourselves in the foot.

 

Max – 00:23:05: Yeah, it’s a tough, tough problem. So shifting over to that, Taylor, so now you’re on the commercial side, right? You’re on the other side. And I’ve always wondered, and I do believe that regulations will evolve. I just hope that they evolve fast enough before we’re all dead.

 

Taylor – 00:23:22: You would hope that there would have been some regulations on social media activity and social media content, but we were saying that 15 years ago and there’s still nothing out there about it.

 

Max – 00:23:32: Yeah, yeah, I think we just recently caught on to ByteDance or TikTok or whatever, right? On the DoD network. And so, yeah, I think on the commercial side, right, this is where, Joel, you mentioned, is it gonna be done in a lab or is it gonna be done in the private sector? I think it’s gonna be a little bit of both, because the private sector, the Googles and the Amazons of the world, they’ve got a very different motive. But when it comes to the Air Force Research Labs and other labs, we’re building things, at least from what I see, with a very different intent: effectiveness, really, and maybe even ethics built into this whole thing. Who decides what is right and wrong? Who is the arbiter of truth? I can see somebody wrangling with those kinds of problems more in the labs than on the private side of the house. But I don’t know, have you seen anything like that, Taylor, where labs are thinking about what the right kind of effectiveness markers are, how to even build an effectiveness model into these kinds of things? Those are the things I think our government would step up to try to build.

 

Taylor – 00:24:36: Yeah, there’s a good division of people thinking about the ethics. I mean, you go back to what we were originally talking about with target acquisition. At what point do you put a human in the loop versus an automated loop, and make sure that there are some left and right boundaries? But the good and the bad of AI is essentially that you don’t want it to have boundaries; you want it to keep evolving and learning as it goes, with no left and right boundaries on it. So how do you put an ethical boundary on it? No idea how to do that.

 

Joel – 00:25:03: Yeah, I mean, I think when we start looking out to the future, there are a lot of ways I believe we can do it. I think we can build a model with ethics and have that model interact with other models. There are lots of different patterns when we get into it, but there’s a large ramp. But what I see is the technology is advancing pretty fast. And what we’ve seen in the history of AI is there have always been periods where it went fast and then slowed down, but it seems to be gaining momentum now. Technology seems to be outpacing us, you know, in the legislation and in the know-how and all the guidelines and policy across the board, public and private. And that’s not a problem that’s getting better anytime soon, it looks like.

 

Taylor – 00:25:42: Now there’s almost a Moore’s Law to the rapid pace of AI, that it started off slower and it’s just getting exponentially better every month, it seems like.

 

Max – 00:25:51: Yeah, we’ve always known, right? Regulation is always like the rear-view mirror, right? It’s already happened. It used to be, yes, we can catch up, we’ve got a couple of years. You’ve got a couple of months now. You’ve got a couple of months, and then it’s changed all over again. So it’s almost like the regulatory cycle is always gonna be in the rear. I personally don’t believe we’ll ever catch up. It’s almost like the way we write legislation needs to change to some degree, be a little bit more open. And then also, in my opinion, it needs to provide a lot more latitude to not interpret it in a black and white manner. That’s what we do in the military. We take what’s supposed to be guidance and we turn it into an instruction, and then we say yes or no. It’s done.

 

Taylor – 00:26:34: Yeah, to your point there, Max, the smartest people to write the regulations around it aren’t necessarily financially inspired to write the most ethically sound regulations. Because you could go to Congress, and I mean, I’m not pinging anybody in Congress, but the standard congressman probably has no idea where the bounds should be on AI. So they’re going to take their inspiration from industry. I don’t know, Joel, what do you think?

 

Joel – 00:26:57: Oh man, you know, when I hear this topic, and I’ve spent a little bit of time working in it, well, I keep referencing the book, not trying to, it’s just been my life, so it’s not like I’m trying to promote anything here. I spent some time thinking about the future of AI. And one of the things, when I was working on the future chapter, was that if we chase technology with process and practice development, we’ll never catch up; we’ll always be behind. So we need to project out ahead of where we’re going to be and start building that policy now. And so, when I talked about the rational agent, we’re worlds away from it, but somebody should be writing that policy and thinking about that legislation now, because it won’t get here in time if we don’t. So thinking down the road, there are a couple of big, important things that we’ve got to hammer out that may be five years in the making, but unless we start now, we’re not gonna be ready for it when it hits us, is what I would say. Maybe that’s the obvious statement.

 

Max – 00:27:55: I agree. So Taylor, we wanted to thank you. This has been a fascinating conversation. We always like hearing from somebody who has been on the other side and has now joined us on the commercial side of the house. Yes, okay, we can’t really make a difference because of the legislation, but I do think there’s a lot we can do, because the government and the military do rely on the private sector quite a bit in order to get ahead. Because I don’t believe we’re gonna be able to maintain any kind of talent to actually do this. So they’re gonna be pointing outwards, right? They’re gonna be pointing outwards to bring some of those things in-house. But before we leave, any parting thoughts in terms of where you think our future is headed when it comes to artificial intelligence and the adoption of this kind of technology?

 

Taylor – 00:28:40: I mean, we talked a little bit about that collective autonomy and how that collective autonomy works. The other part, away from the autonomy, is on the data and information side. A lot of our security is based on the containerization of data, right? So it is secure, but it is containerized in a specific enclave, whether it be a top secret piece of information, what we call SCI, right? Sensitive Compartmented Information. So it’s compartmentalized. The trick, and I know that there are great minds working at this, not only in the government sector but in the civilian sector too, is how can we train an AI to bridge multiple enclaves and multiple containers without giving up the inherent security of those containers? That, I think, is the next step before we get to that collective autonomy, because that’s a wicked problem. That’s the next kind of evolution of where we’re gonna see some of these models go: yes, they can bridge multiple enclaves, multiple containers, yet still retain the integrity and classification of that container. I see that happening probably in the next year, to be frank, and that’s gonna be a big leap.
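As a very rough way to think about the enclave-bridging problem Taylor raises, the sketch below gates what a model is allowed to retrieve by the classification of the enclave it is answering into, so the container's label travels with the data. The labels, documents, and policy here are hypothetical; real cross-domain solutions involve far more than a filter like this.

```python
# Illustrative sketch of "bridging enclaves without giving up the container":
# filter what a model may retrieve by the clearance of the target enclave.
# Labels and documents are hypothetical; this is not a real cross-domain solution.
CLASSIFICATION_ORDER = ["UNCLASSIFIED", "CUI", "SECRET", "TOP SECRET"]

def level(label: str) -> int:
    """Rank a classification label from least to most restrictive."""
    return CLASSIFICATION_ORDER.index(label)

def releasable(doc_label: str, enclave_label: str) -> bool:
    """A document may only flow to an enclave cleared at or above its level."""
    return level(doc_label) <= level(enclave_label)

def retrieve_for_enclave(corpus: list[dict], enclave_label: str) -> list[dict]:
    """Return only the documents the target enclave is allowed to see."""
    return [d for d in corpus if releasable(d["label"], enclave_label)]

corpus = [
    {"label": "UNCLASSIFIED", "text": "public logistics schedule"},
    {"label": "SECRET", "text": "unit movement details"},
]
# A model answering on an unclassified enclave never sees the SECRET record.
print(retrieve_for_enclave(corpus, "UNCLASSIFIED"))
```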

 

Max – 00:29:45: Like within a lab environment, or do you think an actual proof of concept, working and deployed? I mean, that would be pretty fascinating.

 

Taylor – 00:29:51: I am hearing rumblings of some multinationals that have already kind of figured out the multi-enclave piece between secret and unclassified. As you get higher, that’s going to get tougher. But I think there are some rumblings out there that they’re already starting to see some progress on that.

 

Max – 00:30:08: Yeah, I think that’s where we have to go. Back in the day, you mentioned SCI, we got to work in SCIFs. Those are closed-off buildings that nobody can get into unless you have the access. And now we’re seeing endpoints that are classified. So the doors are opening up a little bit, which I think can create room for innovation.

 

Taylor – 00:30:27: And it expands that vulnerability too, that risk-to-reward. So yes, there’s a big reward there in the capabilities that AI provides, but that risk, man, woof.

 

Max – 00:30:36: It’s scary for the military guys. I know it’s scary for me, like, but that’s where we gotta go.

 

Joel – 00:30:41: Oh yeah, I think that is. And I mean, the hope is that this is gonna translate into actually saving lives, and that investing in the military to save lives all around the world, on both sides, is what AI would bring to the table. That would be my hope.

 

Taylor – 00:30:57: Yeah, well, again, thank you, gentlemen, so much for having me today. It’s been a pleasure.

 

Max – 00:31:01: Yeah. Awesome. Well, thank you so much. We love talking to you and this was fascinating. Thank you so much.

 

Joel – 00:31:06: Very good conversation.

 

Taylor – 00:31:07: Thanks.

 

Max – 00:31:09: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel – 00:31:20: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.