
Emerging Cyber Risk

Transparency and Collaboration: Driving AI Adoption in the Military and Government with Aaron McCray of the US Navy

👉 How does the DOD introduce technology into different government branches?

👉 What are the challenges in adopting AI in the Armed Services?

👉 How does AI ethics play out in AI adoption?


On this episode of the Emerging Cyber Risk podcast, our guest is Aaron McCray, a twenty-six-year veteran of the U.S. Navy. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.

Join us as we discuss the challenges and potential of AI adoption in the US Navy. Aaron highlights the importance of collaboration between the military and commercial sectors and the need for validation and testing while adopting AI. He also touches on ethical considerations, potential applications, and the importance of transparency and prioritization in driving AI development. 

The touchpoints of our discussion include:

  • Viewing AI from the Navy’s lens
  • How does the Department of Defense introduce technology into different government branches?
  • The challenges that prevent rapid AI adoption
  • Does AI ethics constrain AI adoption in the defense sector?
  • Aligning the slow pace of adoption with the rapid evolution of AI technology
  • The role of private sector companies in bridging the adoption gap
  • The next steps for DOD to accelerate AI adoption

 

Aaron McCray Bio:

Aaron is a highly accomplished and experienced information security leader with a proven record of developing and implementing enterprise-wide strategies aligned with business objectives and regulatory requirements. He is adept at collaborating with and managing cross-functional teams to identify, assess, and manage risks to information systems and data.

Aaron McCray on LinkedIn

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance programs that safeguard mission-critical IT operations. He trained and served in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on Cyber Risk and Artificial Intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh. 

 

Joel – 00:00:17: And Joel Yonts. Join us as we dive into the development of AI, the evolution of cybersecurity, and other topics driving change in the cyber risk outlook. Welcome to another episode of the Emerging Cyber Risk Podcast. My name is Joel Yonts, I’m your host. Today, we’re going to talk about AI and the US Government. It’s going to be an interesting topic. I think it’s going to be applicable to everyone, not only those people who are active in the US Government but everyone who has a vested interest, which is pretty much the population of the US and probably the world at this point. Got an exciting guest today, but of course, our host Max Aulakh is on as well. Max, I know you’ve had a long career with the Department of Defense and then afterward with the US Government; I imagine this topic is going to really resonate with you as well.

 

Max – 00:01:01: Yeah, it was an honor serving, and the Government is always trying to set the standard in a lot of things. Sometimes, they do a fantastic job; other times, they don’t. So, I’m looking forward to today’s conversation, Joel. 

 

Joel – 00:01:13: Very good. Also on the show, we have Aaron McCray with a career in the US Navy. I think he’s going to bring a lot of great insights to this conversation. Aaron, welcome to the show, and tell us a little bit about yourself.

 

Aaron – 00:01:25: Well, thank you, Joel. Thank you, Max. It’s an honor and a pleasure to be here. And I’ve got to say right off the top that I’m going to speak about the Navy and the use of AI, but I’m not speaking for the United States Navy. I’ve got to be very, very careful. I do hold a pretty high clearance, and I’ve just got to be careful what I say, but I’m what you call a citizen sailor. So I did my time on active duty, got off of active duty, went into the reserves, and embraced my civilian career so I could raise a family, et cetera. Max knows exactly how that is, having gotten off of active duty himself, but it’s afforded me some incredible opportunities. So, in my civilian career, I started out much like a lot of folks in IT, right? Help desk, service desk, work your way up to server and network administration, to eventually doing Chief Information Security Officer roles, which is what I’ve been doing for quite some time. I think I’ve been in this industry for nearly 30 years as an IT Professional and Security Professional. I’ve done a lot of consulting over the last seven to ten years for a lot of global organizations dealing with a lot of global regulation. So whether you’re a business in the United States or in Europe, you have to deal with regulatory compliance. And that’s really where a lot of my background in risk management, governance, and compliance stands on the civilian side. My military career is very similar. So, in the United States Navy, I started out in the enlisted ranks as a radioman, doing signals and then eventually signals intelligence. And then, when I was commissioned as an officer, I was commissioned into what they call the information warfare community.
And so that is really focused on, like, five different domains: we do information assurance; we do information operations, which is planning for the battle space, looking at our adversaries, their use of technology, those types of things, making sure we have a ready defense and how we would pursue an adversary. So, a really interesting background. There is some overlap between my civilian and Navy careers, but on the Navy side, I think I’ve gotten into some pretty unique things.

 

Joel – 00:03:21: Wow, an extensive background for sure. And I’m sure we’re going to have a lot of great conversations about this, and, to your point, I’d love to probe in as far as we can. Of course, we want to make sure we’re cognizant, so feel free, if I ever ask you a question that crosses that line, to let us know. But I would love to know, I know what it looks like on the civilian side when AI just blew up in the past few months or the past year. It’s been pretty shocking, and everyone’s been scrambling a bit to try to catch up and figure out what to do with it. Can you give us a view of what it looks like from a Navy perspective?

 

Aaron – 00:03:55: Absolutely. So the military, for most folks, they don’t know that it really is somewhat on the forefront, the cutting edge, of a lot of these technologies. It’s just slower to adopt them than a lot of the civilian sector. The reason why is that they’re a little bit risk-averse, because when you talk about, for us in the Navy, right, we’re about projecting sea power, it’s great power competition, we’ve got near-peer adversaries that we’ve got to deal with. You don’t take risks that can put your assets, your personnel, and your mission at risk; you just don’t. So it goes through extensive testing, which I think, in this case, when we’re talking about AI, is actually a good thing, right? You want to make sure that you’ve developed your use cases extremely well. You want to stay within those swim lanes, if you will, narrowly define your scope and make sure that you’re testing for effectiveness across the multiple platforms on which you might be deploying your AI. So, in that case, the Navy got it right. Now, where the Navy, like most of the military and the DOD, has had challenges is that oftentimes you operate in a stovepipe. You might have different groups working on different aspects of AI, and never the twain shall meet. They don’t talk to each other; they’re not sharing information. You’ve got research going on that could help if there were some collaboration efforts going on. So, you know, in the early days, and let’s just talk about six, seven, eight, nine years ago, you could have the Office of Naval Research working on something. You could have some joint research going on with the NSA or other groups. There have even been some Navy partnerships with the Air Force Research Labs. The Navy has its own Research Labs as well. And so there’s some collaboration going on, but they don’t necessarily want to share what they’re doing, why they’re doing it, how they’re doing it.
And so, from that perspective, I think it’s slowed down the progress of deployment of effective AI out to the fleet. Now, that’s just my personal opinion, but I’ve read a lot of retired Navy captains, who are the COs of their ships, or maybe even Commodores of a Battle Group, saying the same things, right? There is an immediate need for AI that could be very, very effective. And so that’s why I’m excited about talking about this because I think we’re at the forefront of some cool things.

 

Joel – 00:06:10: Yeah. And I think that when we talk about, again, in the corporate world, the pressure to adopt AI, it’s not just it’d be great if we could do things faster and more efficiently, but there’s a competitive edge. And you don’t want to lose the competitive edge to your competitors that may be adopting at a faster rate. Is that a concept in the US Department of Defense as well, because you’ve got other nation states that are adopting potentially faster?

 

Aaron – 00:06:37: I’ll say yes and no. You’re never going to be pressured by an adoption rate into adopting technologies or tools and tactics that don’t necessarily fit your mission and operational parameters for your particular branch of service. The Navy projects sea power, right? It’s all about getting our fleet out to parts unknown. We can bring the warfighter to the coast, and we’ve got the aircraft, and we’ve got Marines we can deploy. So, from our perspective, it has to fit the Mission Set. What is it that we need AI to do? And if other countries, if the various state actors, are developing AI for nefarious reasons, we’re not going to adopt those practices. We might adopt defensive practices using AI, but we’re still going to focus on what’s important to the fleet and what’s important to our mission.

 

Joel – 00:07:22: Certainly. Yeah. And Max, before we got into this discussion, we were talking about how the Department of Defense actually feeds technology and ideas and policies into largely the Government as a whole. I’d love to hear more about how that works. I think our audience would be interested, too.

 

Max – 00:07:37: Yeah. A lot of these programs, at least from what I’ve seen, work in a stovepipe fashion. You’ve got all of these Research Labs that are not necessarily collaborating. Whereas on the commercial side, you get feedback through basically the public, right? Open AI says, here it is; go test it, go break it, see what happens. So typically, what I have seen is once there is something of viability and feasibility, the Government will actually release this information under different forums. But it’s just hidden. Not a lot of people know how to find it. And then it’ll make its way through different things. And then, somehow, it’ll end up as a commercial capability. Things like satellites, for example, the Internet itself. That was a Government project out of DARPA and things like that. And even when we look at software that we’re all familiar with, like Tenable, for example, the Co-Founder of Tenable and many other Cybersecurity Professionals, they come out of the Government, and a lot of the work that is done on how to scan a network, how to do these things, eventually somehow makes it under different vehicles, whether it be small business innovation, maybe it’s just NIST reaching out to different stakeholders and building a standard out of it. I think when it comes to Artificial Intelligence, because all of the commercial practitioners are saying slow down, it’s not the cybersecurity people; it’s really the AI experts who are deep into this technology. I think the government is automatically thinking, how do we validate it? And I can see some sort of standard coming out, like a joint standard. Aaron, I don’t know if you’ve seen anything like that within the DOD or the Navy, but when we start to see those things come out, Joel, other organizations start to look at that as, hey, this is a potential standard for us to follow.

 

Aaron – 00:09:26: That’s a good point, Max. And from the DOD’s perspective, the standard hasn’t been released yet. Typically, you start with your instruction or your policy set, right? And then you derive your standard from it. So back in 2020, the DOD basically came out with its standard for ethical principles. How are we going to approach AI? How are we going to approach the research, the development, the sharing of the technology, and the information to make sure it’s transparent, traceable, and equitable? They wanted to add some accountability and responsibility to the process. They also want to make sure that AI is reliable. I mean, there are a lot of concerns, even on the civilian side, that the use of AI could actually have a negative impact or cause harm. And we hear about things like deepfakes, right? Using AI, I can generate photos of folks looking like they’re in a situation that they never were in, or even take live clippings of recordings of their voices. So there’s a lot of potential negative use that the DOD wants to make sure it can govern as it develops the technology. But those standards, then, are what, at least from the Navy’s perspective, serve as their guidepost or guideline for how they are going to start the development, testing, and use, and then the release out to the fleet, of AI, whether it’s helping within the decision-making process or it’s more along the lines of machine learning, a subset of AI.

 

Joel – 00:10:46: See, it’s very fascinating to me because I have limited experience with the Department of Defense, but I have been privileged to attend a couple of small conferences talking about AI and the Department of Defense. And I’ve heard from top-level officials and generals in the Navy, Marines, Air Force, and Army talking about AI as the highest unclassified priority of the military. And I thought that was an interesting push. I hear what you’re saying about caution, but you’ve got this pressure to make it the highest priority. So, how does that work out day-to-day?

 

Aaron – 00:11:20: Well, I’m not actually in any of the labs, so I’m not sure that I can speak to the day-to-day of what the Office of Naval Research or the Navy Research Lab does. But from day to day, it just means that there’s probably some bureaucracy over the top of whatever it is that they’re working on, right? So you’re going to have your program managers, you’re going to have your technical program managers, you’re going to have your IA folks, information assurance, making sure that we’re within the guidelines, hitting the correct policies. What does that mean? It means things slow down. It doesn’t mean things speed up. It means things are done decently and in order. And unfortunately, it also means that we’re hitting, I think, in some cases, some unnecessary roadblocks. Let me give you an example. I was a Technical Program Manager down at Navy Personnel Command working on Navy transformation projects. That’s about all I can say at this point. But the folks that I was dealing with, like information assurance and enterprise information management, the folks that are enforcing the governance, don’t necessarily have technical backgrounds. They may not even be military personnel. They could be Department of the Navy personnel, they could be civilians, etc. So there’s a lot of education that takes place. There’s a lot of explanation so that you’re helping them understand what it is you’re doing and how it complies with the frameworks that have been outlined, either by the DOD or the Department of the Navy, etc. So what I would expect from the day-to-day, just from my past experiences, is that we build out our project plan, we have our release cycles, there’s going to be a lot of testing, but each one has to go through a stage gate of approval before we can move to the next. So we’re going to be a little bit behind, I think, our civilian counterparts in corporate America that are using Agile. We like to say we use Agile, but it’s Agile with a caveat.

 

Joel – 00:13:10: Well, before I make a statement, I have the utmost respect for the military and the labs and everything, all of the personnel doing this innovative research, and I’m sure it’s far above my pay grade. I will say that I have heard, and you guys can validate, that the end of a commercial research project may be a working prototype, whereas the end of a government research project, a lot of times, is a 500-page document no one’s ever going to read. Is that a true statement, or is it not?

 

Max – 00:13:35: Yeah, Joel, you bring up an interesting point because I’ve always teeter-tottered between commercial work with institutions like banks and manufacturing, things that we’re all familiar with. So, Aaron’s actually played a heavy role on the commercial side. But then I’ve always had one foot in the Government, unfortunately, and sometimes fortunately, right? So, sometimes, the Government is looking for fundamental research, and they’ll turn everything into a paperwork battle. And when it comes to your comment about how we get commercial off-the-shelf capabilities right into the government, it’s really difficult. So, if we think about a car and if we want to take that car into Iraq or Baghdad, you can certainly put armor on that car. But at a certain point, it’s going to weigh so much that you’re going to think about redoing that whole car because the chassis is not fit, everything is not fit. When it comes to software projects, there’s a theoretical understanding that there’s high reusability. That’s a theoretical understanding. But when we actually get it into the classified space, lack of internet, lack of connectivity, it’s almost like treating it like a car where, hey, I need a different factory altogether in order to build software. Now, how does that impact Artificial Intelligence? I think that’s the exact challenge we have to figure out because Artificial Intelligence is, I’ll use it in Layman’s terms, it’s the wisdom of the crowd. And if you don’t have the crowd, how do you learn from large data sets? Some of these challenges, I’m not really sure how they’re going to be solved. Aaron, I don’t know if you’ve seen any of these kinds of conversations internally, but that’s what I’m thinking of, right? Like, so ChatGPT is getting a lot of feedback, but how is the government supposed to get a ton of feedback when it comes to commercial capabilities that could be used for classified missions, right?

 

Aaron – 00:15:24: I haven’t seen a lot of chat or conversation on these. A lot of that is kept very close to the vest, right? Compartmentalization is the term we use. And if I’m not in that compartment, then I don’t have a need to know, and they’re not going to share it. I do want to comment on something, Joel, that you did reference, which is, at the end of the day, you get a 500-page paper. That is true in many cases, but we’re talking about some of the most brilliant minds in these Research Labs. I mean, I can’t speak their language. They’re just so far above, right? They’re not interested in just publishing papers. I’ll just state that right now. And in fact, in March of this year, there was a wonderful article that actually introduced the Navy’s project called OpenShip. And the whole intent of OpenShip is to do what they call low-lift but high-impact development of AI and Machine Learning applications. So they’re not looking to just publish a paper about what it could look like with some cute 3D model pictures. They’re actually going to produce these applications. Why? Because it’s already been determined over the last three to four years that there is a critical need. Like many of us who are in the military, and Max, I don’t know if this resonates with you, but I tend to read a lot of retired officers that are O6 level and above because they’ve been involved with so much, the strategy, the planning, they came up through the tactical spaces. And so they have some great insight. And when they retire, they write some incredible pieces about what’s really going on and how the Navy can use something like AI. So, there was an article that I had read I just want to say a few days ago that caught my attention. And it was a Navy captain who was a fleet captain. He’d been to sea many, many times. He saw the problems firsthand. And he said, look, the key for the Navy is that we’re faced with life or death decisions. 
And this isn’t the days of the USS Constitution, Old Ironsides, where we might have weeks to prepare to make a decision because of the speed of the fleet and communication. No, we’re talking a matter of minutes, maybe seconds. And so, what is the challenge that we’re faced with? How do we help the decision-makers have good data, and relevant data, at the right systems at the right time to aid them in the decision-making process? In fact, in one interview, the chief scientist of the US Army Research Lab, Dr. Alexander Kott, who heads it up, said, look, here’s really the issue that we’re going to face with AI in the military moving forward: it’s the human cognitive bandwidth that’s going to emerge as the most severe constraint on the battlefield. Hence, we need AI to assist in that process. And if you don’t think that our adversaries are already doing this and working quickly to get to that point, then we’re not paying attention. We’re asleep at the wheel.

 

Max – 00:18:09: Yeah, Joel, I was going to say, if you’re ever interested in the Government side, I know we talked about this, we could just use ChatGPT to write up all that paperwork. But yeah, there’s merit to Aaron’s comment. It’s not a publish-or-perish environment, right? Where if you’re maybe in academics, you have to publish. There’s meaning behind it. But yeah, somehow, we need to break through that to adopt faster.

 

Joel – 00:18:37: Yeah, and when I hear this adoption, there’s this scenario that I was just talking through yesterday that was a little bit absurd, but it sets up the topic of AI Ethics. And one of the conversations I was having was, would we ever give nuclear capabilities to AI? It just screams in my head, we would never do that. However, and you guys are the military background folks, our nuclear situation has been mutually assured destruction for decades now, right? And it was the concept that if you launch at me, I’m going to launch a retaliatory strike in time, and we’ll wipe each other out. But now, if, say, nuclear capabilities can be delivered so fast because of hypersonic technology or whatever, then that breaks down. If I’m first to strike, I can wipe you out before you can decide to retaliate, potentially. And the only answer may be to automate with AI. And so it’s crazy, but I heard a little bit of that in what you were saying there, Aaron, not nuclear, but do you feel like that AI Ethics decision is becoming more constrained because of this speed issue we’re dealing with?

 

Aaron – 00:19:38: I don’t know if it’s constrained. I mean, when you were raising your points, my mind immediately went back to the 1980s, and I was thinking of the movie War Games. I don’t know if you recall that movie. It’s probably dating me a little bit, but that’s a perfect description of the ethics. You’ve got AI taking over, getting ready to launch nuclear warheads, and you have to have human intervention talk the AI down. We’re not there. The other one that came to mind is on the other end of the pendulum, which is leaving the decision strictly to humans. There was a band in the 80s that I really loved called Genesis. Phil Collins, I’m pretty sure we’re all familiar. They had a video, Land of Confusion, if you remember that. It had the puppets, right, from Jim Henson. But there’s Ronald Reagan waking up from a terrible dream, pushing the red button, and launching a nuclear strike. So you have both extremes here, right? We’ve got to land somewhere in the middle, right? So, let me give you some real-world examples that are near and dear to my heart. It wasn’t but a few years ago that we had two collisions out at PACOM and other areas, with the McCain and the Fitzgerald, right? These are guided-missile destroyers, and both were just tragic. I can’t get into the details of them, but I can’t help but think these things happened in the early morning hours. A lot of your key personnel are probably in their quarters. There are protocols; there are rules of engagement. There are lots of things that are part of the decision-making process that we could point fingers at and blame. But where could AI potentially have assisted in detection, response, and alerting, maybe even navigation, in an automated fashion? I mean, there are a lot of things I think about where AI could have been very, very useful, and potentially, we could have avoided some tragic situations, right? Those are just some of the examples.
Would I ever see AI, at least from my honest perspective, making decisions about when or when not to launch nuclear warheads? I would pray to God never. I would hope that there would be reasonable minds behind it, evaluating good data sets that AI has brought together and making good decisions. Where I see it really being beneficial, and honestly, this is what the Navy Research Lab and some subset departments are working on, is things like, what do we do with unmanned vehicles? Whether they’re aerial, subsurface, or surface, take your pick. Where could it aid in things like acquiring targets, evaluating if a target is one that we have been, let’s say, looking for, whether it is a 100% match? As opposed to thinking about what could happen if a human, and this has happened, unfortunately, where we’ve made a wrong decision and we now have collateral damage, human collateral, and it’s innocent human collateral. God forbid. We’ve got to be able to get past that. Could AI help with those things? I absolutely believe so. They’re doing that research. They’re automating the dynamics behind acquiring targets, relaying that information, being able to make quicker decisions, and then, if you’ve got to prosecute the target, you prosecute the target. So those are just some of the examples that I’m thinking about. There are more, a lot more. How about preventative maintenance? How about using AI to actually go and do certain things to make sure that ships are optimal and that the systems themselves are up and running? There’s a lot that can be done. And that’s where my head is going, where the research that I’ve looked at is using AI and Machine Learning.

 

Max – 00:23:04: So I think there’s a lot of opportunity, as we know. There’s very little about the Government, in terms of the Department of Defense and the mission set, that should be classified. That’s just my view. A lot of our Government is pretty transparent compared to all the other world governments, right? We put out public records, we put out spending, we put out each agency’s mission, but what they’re actually, specifically doing might be protected. So, I do believe there’s a huge benefit to adopting outside of the classified mission.

 

Aaron – 00:23:37: Absolutely.

 

Max – 00:23:38: The challenge is we tend to apply the same rules, and we become overly conservative. And so I think there’s a huge benefit to us learning from the commercial adoption practices in order to get the government leveraging this capability, right, whether it’s decision support and things like that.

 

Joel – 00:23:57: Yeah. I mean, that makes a lot of sense. And by the way, before we move on, I’m not advocating nuclear control. 

 

Aaron – 00:24:03: That’s good advice. 

 

Joel – 00:24:04: It’s just an extreme example that I used. But we’ll save that one for later in the year when I throw it out. When I look at what you were just saying, and when we were talking about the speed at which things move, you know, Aaron, you were talking about how there has to be a level of assurance. You run the risk that you’ll never get it because AI is moving so fast. As soon as you get assurance on a particular model or platform, we’re five generations ahead because it’s been rebuilt so many different times. So it’s really going to be challenging to be on the cutting edge, even with current technology, I would imagine, in this phase.

 

Aaron – 00:24:40: Yeah, but again, it goes to the use cases, right? The military is not going to be on the cutting or bleeding edge of pushing technology out to the fleet. They’re just not, right? You have to have stability. You have to have resiliency. You have to have sustainability. Those things are always going to be critical to making sure that we have a ready fleet, ready to go. But where they can focus, and where they can adopt, Max, to your point, working with the commercial sector and the way it produces and releases products, is things like intelligent systems, right? So, using AI or Machine Learning to look at how systems interact with humans and our cognitive functions to aid in how we do our jobs. So think about creating more efficiency, freeing up personnel to do more important critical thinking tasks or higher-level tasks. That’s one of the ways they can do it. Looking at adaptive systems, which is basically looking at autonomous systems, like I mentioned a little bit earlier, or maybe even mobile robotics, and looking at applications that can do some very basic intelligent decision-making on the spot. But it means that there has to be a good set of data that it’s generating or pulling from so that it can make that intelligent decision based on the parameters defined for it. And then, finally, interactive systems. And these are things that I know are near and dear to Max’s heart. Things like natural language, right? Natural language artificial intelligence has immense applications and potential in the military. I can’t really get into it, but it’s important that we think about those and stay within those swim lanes, because the adoption rate of that can be much, much faster than looking at something that’s theoretical and then going through a developmental process. And then how do we apply it? Where does it apply? Where does it make sense? And then how do we maintain it?
So, staying within the swim lanes: what do we have now that this could support?

 

Max – 00:26:34: It’s either that, or we leapfrog if we want to get ahead, because that’s the big thing, right? AI can give us an advantage over our near-peer competitors. We know that. So, as a country, the goal is how do we achieve that within the Government, within the Department of Defense? I mean, we’re all patriots. That’s what we should somehow be trying to figure out: how do we give our country the edge? My personal opinion is that it will take until that capability gets reclassified. Even on the cyber side, we have advanced cyber capabilities and cyber weapons and things like that. But we don’t necessarily call it a cybersecurity scanning tool. We don’t call it Nmap. It might be built with some of the same things. But I think even when we look at the context of Artificial Intelligence, it’s just got to be driven by things that we need. And then I could see the military even over-invest, like, hey, we need this to meet our mission. And Joel, I think that’s when we’ll start to see it. So it comes down to a little bit of messaging, unfortunately, right? There’s a lot of marketing and buzz that has to happen. But I do see the military, as soon as it recognizes its power and its potential, somehow taking that and accelerating it. We’re just in such a lag period right now.

 

Aaron – 00:27:50: Well, they are focused on that a little bit, Max. You talk about the warfighting capabilities. And let’s not forget the United States Navy is a warfighting force, first and foremost. And to your point, utilizing AI, this is public domain knowledge, but the Navy and the Marine Corps do have a strategy for what they call Intelligent Autonomous Systems. So if you think across all of the UAV platforms, how do we accelerate the development, the operationalization, the adoption of this type of technology so that it’s making decisions instantaneously? And from my perspective, that doesn’t mean there’s no oversight and governance on the back end when these things are taking place, right? It still has to happen. There still needs to be some sort of human governance. But the decisions are happening so quickly, and potentially, you know, the lives that could be saved by using IAS, these Intelligent Autonomous Systems, it could be incredible. You think about all these movies that have these robotic fighters, et cetera, et cetera; that’s far in the future. But it’s not an unrealistic concept to think about using some of the technology today to do that without having a human in the cockpit or in the captain’s chair, et cetera, et cetera.

 

Joel – 00:29:01: You know, when I think about this, there are so many upsides, and we always have to think about both sides of it. The other thing that I think about from a cybersecurity perspective is the expansion of the attack surface, because every time you add an AI-enabled component, that component can be attacked. And even if it is in line to deliver lethal force or some critical mission function, it can still be compromised and used to spy on the mission or track soldiers and troop movements. There are a lot of bad things. And I don’t know that we’ve figured out fully, well, actually, I know we haven’t figured out fully, how to secure AI and this new attack surface yet. So, how is that being balanced?

 

Aaron – 00:29:38: That’s a great question and a great point. So, you can use AI in multiple ways. Think about what we call Network Defense, right? You’ve got your red teams and blue teams, right? You have teams that do your Penetration Testing. I know that’s near and dear to Max’s heart. You want to find where those vulnerabilities are. And then you’ve got your blue team that’s defending against those and saying, okay, here’s what we need to fix. Here’s what we need to shore up. Here’s how we can prevent those attacks. I mean, that’s commercial and military; it goes both ways. AI can assist in those efforts. So whether we’re talking about protecting communication channels, or protecting systems and networks, we can do so at the speed of AI, as opposed to waiting for a human to intervene. There are many ways that we can use AI to protect our AI research. I mean, the applications of it are pretty much endless. It’s just: where are we going to spend our time? What are we going to prioritize first? Where are we going to invest our research dollars? And then the testing and deployment of it. One thing I can share over my nearly 27 years in the military, and I know Max can say the same thing, is that we have disparate technology across branches. We have disparate networks across branches. So even if the Navy develops something that works for them, it may not necessarily work for the Department of the Air Force or the Department of the Army. That’s where I think we’ve got to get to a standard, which, bringing this full circle, Max referenced at the beginning. Does the DOD have a standard, and does the standard lead to Commander’s Intent? And what are those items that we have to accomplish? We’re not there yet. But those are some great questions you asked, Joel. I think those are things they’re thinking about or have probably already come up with some answers to. I’m just not privy to it.
But it makes sense that they would be there already.

 

Max – 00:31:21: And I think that’s part of the challenge: there’s so much unknown here compared to how we understand traditional methods of attack and defense. Here, there’s all sorts of feeding in a false data set, inferencing, asking the question in a way that gets you access to things you shouldn’t be getting. Right? There are all sorts of implications. What the government typically does is just silo off everything, compartmentalize. I think Scott, the attorney, was on here a few days ago, talking about how you have your base model, then you add intellectual property to it, and then you train on that intellectual property. It’s actually very similar: it’s all compartmentalized, and no information gets mixed up. And usually, the information would get declassified and downgraded. But this is where Artificial Intelligence can escape that traditional method, because it can infer. It can infer knowledge. Right? So I actually don’t know if we’ll ever see something like a large language model, Joel, across the Department of Defense, a single large language model across all the compartmentalized areas. I mean, that would be crazy if we saw that.

 

Aaron – 00:32:31: The only way something like that happens is if the NSA does it. But they’ll control it. It’ll be across...

 

Max – 00:32:36: They’ll control it. 

 

Aaron – 00:32:37: Right. And then we have access to it. You’ve got to have that independent intelligence agency doing that. The departments themselves, they’re too decentralized. And to Max’s point, they’re too compartmentalized. They’re sandboxes. They’re not going to let anything out of their little sandbox. So unless the NSA does it, I agree with you, Max. I don’t see it happening.

 

Joel – 00:32:57: Yeah. So it troubles me to hear this, but I understand all the right reasons. Right? It’s a tough spot to be in because of the sensitive nature of the information we’re talking about. But if you had just described a private company, then I would pull all my money out of that stock, and so would everybody in the stock market.

 

Max – 00:33:15: They’re not going to make it, exactly.

 

Joel – 00:33:16: So how do we overcome that? Is there a role the private sector can play to bridge the gap in some way, or something we can offer? How do we help in this situation?

 

Aaron – 00:33:25: That’s a great question. I mean, there are still so many, I hate to say it, stage gates, bureaucratic stage gates, that have to be navigated. And I don’t want to take us down a rabbit trail on that, but it does start with collaboration on the front end with the commercial sector, whether it’s in research and development, in testing and application, or in developing potential new use cases. This isn’t new to the military, right? When we’re talking aircraft, warships, tanks, you name it, any type of weapon system, those processes are there. Now, whether we’ve established those with AI, because of the concerns we’ve just discussed, I don’t know, right? I’m expressing my opinion here; I’m not speaking on behalf of the Department of the Navy. But I would think this is one where, honestly, you want to keep some things stovepiped until you have the proper channels and means of being able to share and collaborate safely. My biggest concern, it’s always been my biggest concern, is when we have the supply chain supporting the DOD and that information gets out, what we’re developing for the next ship, the next aircraft, the next weapon system, and the next thing you know, our near-peer adversaries have it before we do, because they got in and they’ve got the information. So we’ve got to be very, very careful. I’m just going to say this: we have great practices and processes, but I’m not sure that we always maintain them.

 

Max – 00:34:42: I think that’s the challenge, Joel; we’ve got to figure it out as a country, right? If we look at it as patriots, how do we make it work? Because when we think about the mission of our country, it’s not about, hey, I would pull out the money; a lot of the people that do work with the government have to have that in their belief structure. We somehow need to figure out a way. We just have to, and in my opinion, if we don’t, we’ll be so far behind it’ll be difficult to catch up. So that’s another area I don’t think we quite know, and the government certainly doesn’t know what it’s going to do, because they’re so focused on laws first, capability first, use cases first, which all makes sense, but this capability challenges the conventional thinking because of how it’s utilized, right? It naturally has to have a large data set. The more things feeding it, the better it is, right? So it’s very difficult.

 

Aaron – 00:35:37: You hit key points there, Max, right? The challenges are no different for the DOD, Federal Government, or the private sector: data management, digital infrastructure requirements, and talent management, which is huge.

 

Max – 00:35:49: Talent management is huge.

 

Aaron – 00:35:50: Talent management is huge, right? And then supporting the analytics of AI development. Those challenges are there for the DOD as well. Now, I mentioned earlier the project OpenShip. The group that is tasked with that is really focused on integrating data sets. So, there has to be a means of collaboration to be able to get that information. And the Navy actually has a group that is tasked with tearing down the stovepipes, at least within the Navy itself. So, they recognize there are challenges. They recognize they need to get to the point that Max just described, and they recognize that this is not going to be done overnight, but it has to be done. So they put somebody in charge of it. They’ve got the authority and the responsibility to carry it out. So maybe in two to three years, we’re having a completely different conversation about AI and the Navy.

 

Joel – 00:36:39: I think that makes so much sense. And really, I reiterate, I have the utmost respect for the challenges that the Navy, the Department of Defense, and the US Government have in regard to this, because it’s a really tough situation. But I will say that when I look at a nation-state where the lines between public and private are blurred, the information can flow freely, and it usually flows toward the government; they don’t have those silos, because they operate under such an authoritarian government. I would imagine it’s hard to compete against that. And the fact that we are where we are just shows the level of talent, I think, of the people that are involved with this, to still do it with the freedoms that we have. So, I think it’s a pretty awesome state of affairs, I guess.

 

Aaron – 00:37:19: Well, you’re absolutely right. You throw ethics out, and anything is possible at the speed of whatever you want to test and do. Our near-peer adversaries, they’re totalitarian regimes, right? Honestly, from the top down, you’re just going to do whatever to accomplish the mission set, ethics aside. Thank God we’re in the United States. I appreciate the ethics and the values, right? We do value human life. We do value the welfare and the prosperity of the American people, and not just the American people and our partners, but also the countries that we go to and aid, whether they’re allied with us or not. I think of the COVID response; we sent out the Comfort and the Mercy. We sent out emergency medical facility teams all over the globe. We didn’t care, right? You think of disaster relief in Haiti and other places. That’s what ethics does. Is it slower to develop? Sure, but I think the value that it brings back and contributes, not just to the American way of life but globally to other nations, is so much more effective.

 

Max – 00:38:20: And on that, I completely, 100% agree with you. And Joel, I know we’ve had a lot of conversations with a lot of other experts. Somehow, I think it always points back to, hey, we need somebody who understands ethics and philosophy, things that typically fall outside of information assurance professionals, risk management professionals, and even AI experts, because how do you codify ethics within Artificial Intelligence? I mean, we have it in our own human behavior, what we believe in, things like that. But I think it is a challenge to make it fair and equitable. The government will most certainly figure it out before anybody else, because they have a vested interest in doing it. That’s how I view it, at least. And that’s going to take time. We’ve got to distill out, right, what that even means.

 

Aaron – 00:39:10: To Max’s point, there’s also the governance of that ethics. I can define the standard that says, this is what you ought to do; this is ethical AI. But then we’re going to go look at the morality of it. What’s being done? How is it being executed? There can be a huge gap between the ethics and the morality of what we’re doing with AI day to day. That’s where Max is spot on. You’ve got to have somebody in that role, and maybe it’s a team, that’s governing to make sure that we’re not straying, of course, from our ethical standards.

 

Joel – 00:39:38: Absolutely. And one of the things that I’ve actually been researching and working on is AI ethics as a model. I believe that we need AI ethics in so many places, especially in a diverse environment like the Department of Defense. I think we’re going to have to build an AI model that codifies the ethics we want embodied, and then have it involved in real-time decision-making as a countermeasure, something like that. We’re going to have to automate it. And humans evolve, but that’s probably a different topic. I think that’s going to be an important thing to develop.

 

Aaron – 00:40:11: No, I would concur. And as soon as you have humans, you have human bias. You have to work through all those issues. But with your premise, I agree with it. I think that makes sense.

 

Joel – 00:40:19: Oh, very good. I know we’re coming up on time here, and we’ve articulated a lot of great things and a lot of troubling things ahead that we’ve got to work through. So I’d love to sum it up: what do we think is the positive step forward? What are your thoughts on how we get to the next steps? Aaron and Max, what do we have in front of us? What are the next steps you think we need as a country, as a Department of Defense, or whatever, to make this a reality?

 

Aaron – 00:40:44: We’ve covered this, right? You first have to define the ethical guidelines you’ll operate under. And once you have those, then you can start defining your use cases for AI, which then directs your activities or your actions. And then we have to break down the stovepipes so that we’re not having multiple different research labs working on the same things. And the Navy has actually moved on that. They have a group that’s headed up by an active-duty captain, and essentially, their intent is to break down those stovepipes. I think that’s going to speed up development and adoption. It’s also going to centralize feedback and serve as the central point for: what are we going to do now? What are we doing next? What are all these requirements coming to us from the fleet? What do we need from AI? I gave you five or six good examples of how the Navy’s using AI. Those weren’t developed in a vacuum. That came from input, probably from multiple different groups within the Navy itself, saying, we need this, we need this, we need this. To me, that’s how it’s going to continue. You’ve got to have those working groups. You have to have the right leadership in place. You’ve got to make sure that you’re partnering with the commercial sector, right? Don’t lose sight of the great work they’re doing just because you’re afraid of what might get out. You’ve got to work through those processes. And then you’ve got to be transparent, right? Where does the funding come from? You’ve got to work with DC, and you’ve got to make sure that you’re aligned with them on your vision and your mission. To me, that is really the framework or the structure that has to be in place to be successful with AI in the Navy, and probably in the DOD altogether. Max, I welcome your thoughts.

 

Max – 00:42:22: Yeah, well, Aaron, as we come to an end, I just want to thank you for this. I 100% agree with you on a lot of those things. From my point of view, right, being a prior-service guy and now being on the outside, man, I just wish for more transparency. So if, let’s say, some small innovative organization comes up with something, it’s not hidden behind a veil of, well, where do I go? How do I get started? I think our country needs to somehow figure that out, because I think a lot of these things are going to be commercially built, because of the power of computing and its availability to everybody. It’s not a problem of AI; it’s a legislative problem. It’s legal, right? So that’s what I’m hoping for. I’m hoping for those types of changes that are a catalyst for faster adoption, because a lot of this is going to come out of the commercial side of the house. But Aaron, with that, I wanted to thank you. I think this was a fantastic topic for those who are listening in and have never worked with the government, to see what we’re up against.

 

Aaron – 00:43:23: Agreed, yes. I just want to say thank you to both you and Joel for having me on. I’ve enjoyed your previous podcasts and look forward to future ones.

 

Joel – 00:43:30: Well, it’s been a great show, great time. And I think we should check in with you in the near future to see how we’re doing on this journey because it’s going to evolve fast, or at least we hope it will.

 

Aaron – 00:43:41: I would like that, thank you. Appreciate it.

 

Max – 00:43:45: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

 

Joel – 00:43:56: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.

 
