On this episode of the Emerging Cyber Risk podcast, our guest is Ron Fehlen, VP and GM of USAF Programs and Broadband Communication Systems at L3Harris Technologies, the trusted disruptor for the global aerospace and defense industry. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.
Join us as we discuss the adoption of artificial intelligence, including both the negatives and the upsides. Discover the opportunities for AI adoption in the defense industry and whether it is likely that machines can ever be truly trusted for critical missions. Explore the importance of public discussions around the ethical implications of AI adoption and the use of synthetic data for training larger models.
The touchpoints of our discussion include:
👉 Trusting machines in critical missions
👉 Public discussions on the ethical considerations of AI
👉 Whether AI adoption can, or even should, be avoided
Ron Fehlen Bio:
Ron Fehlen is the VP and GM of USAF Programs and Broadband Communication Systems at L3Harris Technologies, the trusted disruptor for the global aerospace and defense industry. Prior to this, he worked at Raytheon Technologies as Executive Director of ISR & Comm at Raytheon Intelligence & Space. He also previously served as Deputy Director of the Advanced Space Capabilities Directorate in the Air Force Rapid Capabilities Office, United States Air Force, where he oversaw more than $21 billion of classified technology programs and over ten high-performance teams at the premier USAF acquisition organization.
Ron Fehlen on LinkedIn
Get to Know Your Hosts:
Max Aulakh Bio:
Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and excelled while serving in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.
Max Aulakh on LinkedIn
Ignyte Assurance Platform Website
Joel Yonts Bio:
Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.
Joel Yonts on LinkedIn
Secure Robotics Website
Malicious Streams Website
Max – 00:00:00: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on Cyber Risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh
Joel – 00:00:18: and Joel Yonts. Join us as we dive into the development of AI, the evolution of cybersecurity, and other topics driving change in the cyber risk outlook. Welcome to another episode of the Emerging Cyber Risk podcast. I’m your host, Joel Yonts. And as always, Max Aulakh is with me. Max, I think we’ve got a great show today. We’ve got an industry leader in defense industrial base with a long, interesting background. So Max, if you want to kick us off. Yeah, thank you, Joel. And for everyone that’s listening out there, we’ve had a couple of different folks from the military. But today I want to introduce in my friend Ron. And we’re going to be discussing, As always, artificial intelligence. We want to talk about adoption. What are some of the negative impacts of artificial intelligence but also some upside to artificial intelligence. Before we go into this, Ron, why don’t you tell us a little bit about your background, sir, how you kind of got to where you’re at, and share a little bit about what you’re doing at L3.
Ron – 00:00:46: I appreciate it, Max, and appreciate Joel for the opportunity. It is an absolutely exciting time. But I’ll go back just a little bit in history. From my background, I joined the Air Force straight out of high school. And from there, went into what we call operations. I was flying on an aircraft that did airborne sensing, big radar in the sky, if you will. And we tracked aircraft during operations. And it allowed me to sort of look over the shoulder of people that on a moment by moment basis were making decisions within a battlefield environment. And it was very interesting to be a part of that. But following that period of time, the Air Force sent me off to get a couple degrees, both of them in engineering with electrical engineering focus. The final one was in electromagnetics, some things that are generally related only to military systems. But had a really good time there. Learned a lot about how we use data, how we do analysis, as well as what that means for some of the physical world things that we do. Spent a great deal of time in acquisitions, so buying things for the Air Force as an acquisition officer, about 15 years or so doing that. And spent a good deal of time in a number of classified arenas, things of that nature. But last six years was a lot of fun and an interesting time in history within the Department of Defense. I was in the Pentagon and worked on the Secretary of the Air Force’s staff in acquisitions, which basically meant I had responsibility to then Dr. Wilson for a part of the Air Force budget, explaining to Congress what we were going to use it for, making sure that internally we guided and directed the strategy for that funding and actually executed it. And then the last three years, I was in a small organization called the Rapid Capabilities Office. Where there I had the space programs. I got to be involved. For example, X -37, there’s some press releases out there with my name on it. We were able to launch what is essentially an unmanned space shuttle off of Falcon 9 and recover it down at Kennedy Space Center. So, really an interesting time from that perspective. But also, I’ll tell you, during that period of time was when we were starting the conversation about artificial intelligence and its applications and not just commercial, but also defense applications. And so a lot of the things coming about today are now sort of the next phase of that initial discussion on what does this mean? What do we do with it? And just sort of learning about how to implement it in various programs and as a capability. From there and about the 2019 time frame, retired from the Air Force, went to work at Raytheon and was responsible for satellites, payloads, and things of that nature before moving over here to L3 Harris. And what’s really a lot of fun, I really enjoy the opportunity at L3 Harris here as I’m a general manager for all the Air Force and Space Force business associated with the broadband communications sector. Long, fancy title that basically says, I get to work with the Air Force and Space Force to provide connectivity for them. And if you think about some of the applications for AI, they’re all over the place. But one of them is, how do I maintain connectivity between folks that need to talk? Essentially, I know it may seem like a small thing for us who carry cell phones around. We may get frustrated as we’re driving to work and we see a gap in our cell phone and we think, I lost a call for 30 seconds. 
Now, position yourself in an aircraft, position yourself as a soldier, and you start to think about what the loss of connectivity does to them. A lot of times, they don't have time to figure it out on their own because of the things going on around them. They've had to previously, but now with artificial intelligence, does that open up a use case? Artificial intelligence is enabling the warfighter, the soldier, whatever the case may be, just in the same way that it enables our personal lives. You know, I'm looking forward to this conversation because it is that unique. What are the differences between, I'll say, the commercial applications of AI that we're more used to, and how does that differ at all from, I'll say, a Department of Defense use case? There are some cases where it differs greatly and there are some where it's actually sort of the same. So it's really interesting, but it brings out different characteristics as you're having the conversation. So again, appreciate the time with you today.
Joel – 00:05:18: That’s fascinating. I mean, fascinating background. And when you talk about communications and adoption, the AI can only work if it can get data. And so sensory data, remote sensor data, going into an AI model or AI model, talking back to a larger model, that’s critical. So, I imagine that’s gonna make it even more important a lot of the work you’re doing.
Ron – 00:05:38: Yeah, 100%. You know, it's interesting. You start just with, okay, what's the challenge for adoption, whether it be a defense market or otherwise, the number one thing is the density of data. If I don't have the training models available to me, if I'm training, for example, in a machine learning environment or reinforcement learning where I'm actually feeding it data over time, well, some of those use cases, you're actually hoping you never get into, but you wanna make sure that it works when you get in there. So how do you generate a data set or deal with real data of sufficient density that you can actually have confidence in the output that it's giving you?
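To make the data-density problem Ron raises concrete, here is a minimal, purely illustrative sketch of padding out a rare, high-consequence regime with synthetic samples and then checking how much training density actually lands near that regime. The feature layout, distributions, and thresholds are invented assumptions, not drawn from any L3Harris or DoD system.

```python
# Minimal sketch: padding out rare, high-consequence cases with synthetic samples.
# All feature names and distributions are hypothetical, purely for illustration.
import numpy as np

rng = np.random.default_rng(42)

def real_observations(n=1000):
    """Stand-in for collected operational data: mostly routine conditions."""
    return rng.normal(loc=[0.0, 0.0], scale=[1.0, 1.0], size=(n, 2))

def synthesize_rare_cases(n=200):
    """Simulate edge-of-envelope conditions we hope never to see live,
    so the model has *some* density of examples to learn from."""
    return rng.normal(loc=[4.0, -3.0], scale=[0.5, 0.5], size=(n, 2))

X_real = real_observations()
X_syn = synthesize_rare_cases()
X_train = np.vstack([X_real, X_syn])
y_train = np.concatenate([np.zeros(len(X_real)), np.ones(len(X_syn))])

# Quick density check: how many training examples fall near the rare regime?
near_rare = np.sum(np.linalg.norm(X_train - np.array([4.0, -3.0]), axis=1) < 1.5)
print(f"Training examples near the rare regime: {near_rare} of {len(X_train)}")
```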
Max – 00:06:11: Ron, yeah, thank you very much for your background. I love that you're prior Air Force, myself being prior Air Force. So some of the things you talked about, acquisition, briefing Congress, which is fascinating in itself, rapid capabilities development, right? We've always had those outfits. So thank you for your service. I just wanted to say that, Ron, and we appreciate you so much. But I think going back to your point in terms of how do we feed this monster that requires a lot of data: have you seen a lot of use of synthetic data to kind of fill that gap, to try to train different models? And how do we even build synthetic data where it may or may not be sufficient, when it comes to training on a large data set that you have to do on the non-public side, right? Where the military may pull information in, but they still need realistic information that's not otherwise available to the military.
Ron – 00:07:00: Yeah, no, it’s a good question. And there have been, I’ll say, opportunities to look at synthetic data. But I think a lot of times what’s interesting is setting up those use cases to basically use the synthetic data to understand what the AI is deciding on. When any of us hire or go into a job interview, part of what people are looking for us is our judgment. What data do we need to make decisions? How do we make those decisions and so forth? So, the synthetic sort of answers the, I’m gonna give you a set of data. But the second part of trying to really get to the AI’s judgment, using a human word to describe it, how does it come up with its answer? And so there is that part of it, getting synthetic use cases enable you, because you can do quarter cases and things of that nature. You can look at the matrix and how it’s coming up to its solution set. But because sometimes it’s not deterministic, you get that sense of, okay, I wanna have trust in your quote unquote judgment, like I would sitting across the table from somebody that’s making decisions for a business or making decisions in the commercial world. And so, yeah, there is absolutely some use of synthetic data because you just don’t have the density or it may not be data that you readily have available. But fundamentally you’re trying to tease out, how is it making its decision? I’ve heard the analogy of, if I give you 300 pictures of cats and I asked you to pick out the Siamese cat, well, did you pick it out because it was Siamese? Or did you pick it out because the data I gave you, the Siamese cat was all sort of in the upper right -hand quadrant. And so now it’s a data issue and you can sort of fair it out now, how do I make sure as I curate the data as it goes in, that it is actually gonna produce a level of judgment on the AI side that produces an answer I can have confidence in.
Joel – 00:08:42: I love it. And Max, I think going to the synthetic is naturally where I went as well. That makes a lot of sense to map the enterprise of what’s possible. I know you’ve done some rapid stuff in your day. I looked over your tenure dealing with some rapidly evolving situations. So what happens, you get technology deployed out there and your synthetic data has some significant gaps that you need to resolve right now because you have war fighters in situations. How do you build that resiliency to adapt that fast in that diverse of an environment?
Ron – 00:09:11: It is an absolutely great question. Let me frame it this way: imagine the first time a fighter goes out that's completely controlled by artificial intelligence. The fighter pilot takes off in his aircraft, and this other airplane takes off at the same time. There are videos all over the internet about this. And in fact, even DARPA did it. They called it AlphaDogfight, right? Where they used AI to go up against a human, if you will, from a fighter perspective. But imagine that first time it happens in real space, trained well ahead of time. Imagine that, and all of a sudden the AI isn't quite making the decisions you thought of because of the training data, and you may not find out till much later. And so it's almost as if, until you build that confidence, you're gonna want that switch that takes it down a notch. I think of it in the sense of, we want to get to somebody that has a graduate degree in whatever skillset we're employing. In this case, flying an airplane in combat, right? But you may want the switch that throttles them back to a high school level. That basically says, hey, just follow me, because I've all of a sudden lost trust or confidence that the input you're receiving is producing what I'm used to seeing. It's the path to adoption. There may be some point, maybe decades away, where that switch will be taken off. But it's like when I think about an electric car that's got artificial intelligence that's driving along the road. What do I have to do? I have to keep my hand on the wheel. Because the idea is it's not close enough yet that I can trust it, in all circumstances, to always behave the way that human judgment would. So I need some failsafe, I'll say, that keeps, in this case, the car in motion, but where I can take over operations immediately for safety purposes or otherwise. So I liken it a little bit to that, to your point of, I don't think there's a way necessarily to rapidly update the training data or otherwise, because you'll have to understand what it is that caused that. Now, I do believe that we can close the loop on that a lot quicker in the future: hey, this is the data I received, give me a data set, run it, number crunch it, and maybe a day or two later I can make an update to it. But until we get to that level of responsiveness, I'll say, in the infrastructure, the enterprise, that's going to be required to support that, you're going to have to have a throttle switch that takes you back. I hate to say it, but it sort of dumbs down the artificial intelligence or restricts some of what it's able to do, and allows the human operator to then take more control for a period of time.
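The "throttle switch" Ron describes can be pictured as a small supervisory gate that steps the autonomy level down when model confidence drops or the inputs drift away from what the system was trained on. The levels, thresholds, and signal names below are hypothetical; this is only a sketch of the shape of such a gate, not any fielded system.

```python
# Minimal sketch of a supervisory "throttle switch": step the autonomy level down
# and defer to the human operator when confidence or input familiarity degrades.
# All class names and thresholds are hypothetical.
from dataclasses import dataclass
from enum import Enum

class AutonomyLevel(Enum):
    FULL = "model acts on its own"
    ADVISORY = "model recommends, human decides"
    MANUAL = "human flies, model stays silent"

@dataclass
class GateDecision:
    level: AutonomyLevel
    reason: str

def autonomy_gate(model_confidence: float, input_drift: float) -> GateDecision:
    """Pick an autonomy level from simple, conservative thresholds."""
    if input_drift > 0.4:
        return GateDecision(AutonomyLevel.MANUAL, "inputs look unlike training data")
    if model_confidence < 0.7:
        return GateDecision(AutonomyLevel.ADVISORY, "low confidence: recommend only")
    return GateDecision(AutonomyLevel.FULL, "confidence and inputs within bounds")

if __name__ == "__main__":
    print(autonomy_gate(model_confidence=0.93, input_drift=0.05))
    print(autonomy_gate(model_confidence=0.55, input_drift=0.10))
    print(autonomy_gate(model_confidence=0.90, input_drift=0.60))
```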
Joel – 00:11:31: Ron, I don’t know how you feel about this. But I feel like when it comes to these critical missions, regardless of the equipment being used, if it’s a mission, will we ever trust a machine? We might know how it makes the decision, how it automates, completely stripping the benefit of even automation, forget AI. But have you seen it in your career where for a critical mission, where we’re actually allowing the machine to do some things, where the human is taken out of the loop?
Ron – 00:11:59: There are some, not passive, but I’ll say critical, and that’s where it really boils down to AI as an enabler of efficiency. If the goal is just to make this process more efficient, maybe it’s aircraft planning or something like that, then it’s critical, but it doesn’t have the likelihood of such a high cost output, something gets destroyed or otherwise because of it. And there’s time, I’ll say, to be able to double check the work. If you’re planning out tomorrow’s flight plan, then I might be able to double check the work. So, there are those pieces. I will say the other side of it, and I know it’s been in the news. I think the Secretary of the Air Force and some others have talked about it. There are situations today where we trust the individual in the cockpit, but they don’t have the full authority to do whatever they want. We have rules of engagement. In the case of the military, we have a JIVA convention. There are sometimes lawyers in the loop to make sure that we’re doing the right thing. So there’s those portions of it where I see artificial intelligence. There are some critical actions that they will likely never be able to take because even in the human realm, we don’t always take those without some backstops. But they will very likely be recommending courses of action. And there it goes from one is efficiency, but the other is supporting decision making rather than doing the decision itself. If that makes sense.
Joel – 00:13:10: Yeah, there’s the only non -military guy in the room. I’m going to do the devil’s advocate thing and just say it stands to reason in my mind. All that sounds great until your enemy on the battlefield decides that they have a different barometer and they employ it at the risk of their own people, but they have superior capabilities because of it. So, that seems to be another variable that may change that somewhat.
Ron – 00:13:38: Yeah, 100%. And I think that’s where you take out the AI aspect of it completely. You back up maybe 10 years and how we employ forces. It goes back to how do I get AI as an enabler even in a changing environment? To your point, we could define a set of data and use cases today, but we like to say the people we go up against, they get a vote. And so you have to figure out the balance between and that’s why I think the support of decision making actually makes a lot of sense because it’s helping people sit through data. Of course, the corollary of that is I need to be able to trust it. I was actually Joel, with your cybersecurity background, I was thinking this is sort of the entry point of what happens if somebody starts messing with the data or the underlying pieces of it. And that’s why I go back to the unique thing about operations, at least from a military perspective, is we do a lot of training in the hopes that we never have to use that. And so from that perspective, it builds up within constrained environments, sort of builds up an understanding of the tools. And you start to trust them such that you would start to believe or at least want to as you start using it in real operations, if somebody’s messed with it, you might get a sense for it. And I liken it to this. I’ve been told that in the Treasury Department, when they’re trying to teach people about what’s a counterfeit bill versus a real bill, they give them lots of dollar bills, and then they give them one counterfeit. So you get used to what it feels like from a realness perspective, so that you know when it’s false or when there’s something just not right. And So, I talked in the beginning about building in that judgment from the AI and understanding it. But it’s also building in the judgment inside the human that’s being enabled by AI to say, I don’t think that’s right. You and I, all of us use some level of AI every time we get on the road, and we use whatever maps app that we like to use. And we’ve all seen it. We’re like, we’re looking at the route that it planned out for us. We’re thinking to ourselves, you know, I got it. But it’s four o ‘clock on a Friday, and I know I’m going to hit that point 20 minutes from now. And I know that it usually slows down there. So unless I’m over in the left -hand lane or if there’s an accident, it’s not going to happen. So I’ll route around and save myself some time. We have essentially taken out the judgment of the AI and said, I understand what you’re doing. 90 % of the time, it’s helpful. But I also understand the operational environment, in this case, the highway better than you do. And so I’m going to make this judgment to go over here, make a small modification to what they’re recommending.
Max – 00:16:02: It’ll be very interesting when we do start to see our, I guess, adversaries use it in a way that we typically wouldn’t because of our standards, our beliefs and all of these things, law, war, conflict, Geneva Convention, all these things we put in place. And some people would simply not follow those. Right. That’s a different kind of conversation. I think it escapes AI altogether. But it is interesting because we’re starting to see countries like China put in policy. This AI thing should not subvert the state. Right. They’re doing that very intentionally. But yeah, a very interesting future in terms of where we’re going. Now, in terms of just securing the artificial intelligence, you know, you being a cybersecurity leader that’s putting out some content for this in terms of the government’s ability to secure this capability, bringing it from the commercial into the side where the government can use it. Where do you see that going, Ron? Are there going to be some relaxed rules or is it even going to be harder? Because typically the government doesn’t take commercial stuff well into the other side. Whatever the other side might be, even if it’s unclassified, down to nothing. Right. What are your thoughts on that?
Ron – 00:17:09: You know, it’s very interesting. And you’re correct. I think about it in a DoD mindset, how you frame the question. But I have to admit, when I look at it, let’s just take chat GPT as an example. Right. If you look at that as an example of what the future might look like and some of the concerns, whether from an ethics perspective, the data that was used and so forth. And then, I think I read about a couple of lawyers that decided to use chat GPT to put together a briefing for the court. Right. And so it goes down to the usage of it as well. So there are those interesting pieces that I think the DoD is just one aspect of society that is coming to grips with it. Chat GPT as an example, it gives a more public discussion on ethics. Whereas in the Department of Defense, whether it be classification or whether it be just sensitivity to how we’re thinking about using it, it may hinder and sort of closed door a lot of those discussions. And this is where I actually think the public debate and discussion along the ethics of artificial intelligence are going to help inform, in many cases, the Department of Defense, because it just broadens the perspective on it. But yet it’s still some of the same criticality. I mean, it’s the ethical discussion I’ve heard from a public perspective of an electric vehicle that has AI in it. It’s coming up to an intersection and it’s got to make a decision on, you know, it’s not going to slow down in time and now it’s going to hit one or two things. It’s going to hurt the passenger or it’s going to hurt the person walking on the side of the street. How does it make those decisions? I think having an environment of basis for having those discussions as a culture, as a society, helps sort of inform and work through some of the harder problems, I’ll say, before it gets to, OK, now how would you use the Department of Defense application? Too many folks absent that public discussion and a personal understanding and potential impact. People often go to, you know, terminate our movies. It’s going to take over the world. I understand that it was a great movie. But in reality, as we start to see it impact our personal lives, it really prepares us better for the discussion than we have to have in some of these very critical areas.
Joel – 00:19:12: You know, I hear when you’re talking about that, it makes so much sense. And I like that logical approach. One of the things that I’m working with a lot of my corporate customers, and I can only think of it being magnified greatly when you’re looking at the defense industrial base, is AI is gonna seep in around the cracks, because every piece of software is gonna have AI in it. It’s not like it’s gonna suddenly be labeled as, oh, this is an AI. No, Photoshop has generative AI built into it now. It’s a feature. You just got it when you got the update. Natural language processing is a thing. I mean, it’s going to be so pervasive, as well as those organizations are gonna be using. So, I don’t know how you keep it out purely.
Ron – 00:19:51: Yeah, I would agree with you, Joel. And so it’s like saying, you know, we’re not gonna use combustion engines. I know everybody else does, but we’re not gonna use combustion engines, right? It’s a fact of the reality of technology that you need to figure out less of fear of it, but more of a, how do I use it? And in some cases, frankly, control the use of it to make sure it’s used in an ethical manner, because at the end of the day, there’s nobody that can say, well, the AI did it. That’s not the way this is gonna work. I agree with you from that perspective. True, it’s gonna be pervasive. You know, the Department of Defense just sort of likes to either say, well, it’s on the inside or it’s on the outside, right, I’m gonna deal with it or I’m not gonna deal with it. And I think the fact that it’s showing up in more commercial products, it helps us to think a little bit more about potential vulnerabilities because of it, as well as the uses of it and how to control and restrict that. That’s a little bit like the internet too. I use the combustion engine, but the internet’s the same thing. It’s developed by the government. Now it’s out there. Everybody’s using it. It’s got good purposes. It’s got bad purposes, bad actors, good actors, I should say. And so it’s really operating with that environment and understanding that there are going to be some bad actors when it comes to artificial intelligence, and there are going to be some good actors. And I don’t know that we have a way to figure out the two and go back to the intent of the user or application. I don’t know that we have a way to sit through those.
Joel – 00:21:10: One of the threads that I wanted to pick up on this, because I think what you said just makes logical sense. We’ve got to figure it out. But this is true across the board. There’s a lot of things we’ve got to figure out when the time is right. So I hear once we get to autonomy, we get to all these. But one of the things that’s on my mind is the time scale is not linear. We can’t just look and say, oh, well, right now we’re pretty immature and it’s been five years to get here. It may be here in four months, because what we’re seeing is certain things are accelerating very quickly. And the lead time to resolve these things from a policy standpoint or procedural or technology is going to be years to fix. So is there an awareness that even though it seems far off, it may be here by next year? And are we trying to build the future now or are we waiting till we get cleaned by it?
Ron – 00:21:57: Yeah, no, it’s a great question. I think while it’s becoming more prominent, and I agree with you that time scales always seem to be moving faster, either that or I’m just getting older. But if you look back in the Department of Defense history as an example, right, they stood up an artificial intelligence center about, wow, I think it’s been five to six years now. And so while not a lot has been broadcast about what they’ve done and so forth, I know that they at least put intent behind it, people behind it to start figuring some of those things out. While it hasn’t been as public, I’ll say, for lots of different reasons, I’ll say the same on the artificial intelligence within our normal lives. In the last four to five years that we’ve talked about it, how many people have been wanting to buy the electric car that’s gonna drive them to work and they can read a book or whatever the case may be. And we’re slowly, and we’re getting there for sure, particularly in the last couple of years. So I think the idea that it’s rising in prominence, it has been within the service, the Department of Defense, and it has been on the outside of the Department of Defense. And now that we’re getting to sort of this nexus point where there’s a lot more from a public domain perspective, not just on the, this is a really cool thing, but also, oh my gosh, these are the things it can do. How do we work through those from an ethics standpoint? I actually think the Department of Defense is decently well positioned from that perspective on having some people in the right spot and having done some initial work to really understand what could be done to the point where they can join sort of that public discussion. Some of the mission use cases and staying away from those, but there are business processes within the Department of Defense that could really benefit from artificial intelligence. And a great one is the amount of funding that goes into research and development, small activities, whether it be with small businesses or otherwise, that are spread throughout the services, but often the funders find it very difficult to understand what’s the return on investment. How is this making an impact on the mission? How do I connect those things, those small, I’ll say research and development type activities to large programs to make a difference, just because there’s so much out there and so much funding. So I think those are some examples of where it’s not classified, it’s not operationally sensitive, it’s not anything, but you can start to apply some level of artificial intelligence for just connecting humans to advance innovation. So, you can sort of start working to that because even that, okay, but what if the AI sends these small businesses always to this one company or things like that, you can still have that ethics conversation, but it’s in a different operational context that relieves some of the pressure, I’ll say, while we’re figuring out the other pieces at the same time.
Max – 00:24:30: Now, Joel, the other thing I’ve kind of picked up, and we’ve had a few different folks from the government side and with experience with the military, to some degree, and you guys can correct me if I’m thinking about this the wrong way, the government is waiting to be informed by the public. Like, what’s going on? What are you doing, industry? And industry is like, well, we built this crazy thing, what’s permissible? And they don’t know, so that’s why I think you see like, oh, well, the industry’s saying we need to regulate this thing because they know its capabilities, and the government is waiting to be informed on those capabilities. But I think Joel, you went to one of these conferences too, it’s the JAIC, right? JAIC, I think that’s the Joint Artificial Intelligence Center. I haven’t seen anything around ethics, Ron, you touch on this ethics issue, and I think that’s really important because almost everybody we talk to thinks about it strategically, it is about deciding right from wrong and who is the arbiter of truth. Have you seen anything like that within any community on figuring out that framework for what this needs to look like?
Ron – 00:25:31: Yeah, I’ve seen the use cases and we’ve talked to a couple of them here. I haven’t seen the framework, and it may be out there and I just haven’t been exposed to it. And you know, it’s interesting, I remember having a conversation years ago, this is about five, six years ago, and was talking about how would you train artificial intelligence to drive something, say it’s an airplane. And a very stupid individual made the comment to me, well, why don’t you train them any other way than you train a human? Just have them go through the same training. Now, obviously the methods, the data, the things that may be different to the AI, may be able to scan the book you’re reading and discussing and so forth to understand context. But I think there’s something to be said there: can you create that framework that says whether it’s permissible, whether it’s ethical, and be able to describe that and then push the artificial intelligence through it. It makes sense at a top level, but it’s implementation. Okay, so how do I get that down to analysis or scripts or use cases or whatever? And I think the car example that I used previously on, you’re either gonna hurt the person driving the car or you’re gonna accidentally hit somebody on the street because you don’t have enough time to stop to avoid either one. That’s sort of a framework that at least would tell you what decision would the artificial intelligence make. But that doesn’t at the end of it decide the most important question of is it self -sacrifice to the individual inside the car and they sign up for that when they start driving something that’s being guided by artificial intelligence, or is it protected at all costs, the human inside the vehicle and therefore somebody’s gonna be at risk outside. And now take that for example, how would you decide? So you and I would decide more than likely, I know my inclination is I would take the hit myself. I have airbags around me, I took the risk by doing this, that person walking down the street didn’t. So you’d be able to at least walk through a use case that says this is what I know a human answer would be. Now, does the artificial intelligence decide the same thing or not? You and I both know that’s very sort of experimental, it’s almost the high school way of doing it to just let’s try it, let’s define something and try it. Rather than the let’s step back and say, what does it look like from an ethical framework perspective, but at least you get started on the conversation that way.
Joel – 00:27:38: Absolutely.
Ron – 00:27:39: I think it will emerge. We’re waiting because you have organizations like Tesla and others experimenting with self -driving cars. That’s what they’re dealing with right now. So to some degree, it’s almost like, man, we got to wait for some people to die, but hopefully it’s not that. It’s not that. What is that rationale that’s being baked into some of these probabilistic elements because it’s not deterministic. Things can change all the time. So it is being coded. We just don’t know how yet. And I think that’s kind of what the government is waiting for. Hopefully the industry will share to some degree. And if they don’t, then we’ll have to wait till something catastrophic happens.
Joel – 00:28:19: Yeah. Okay. I’m going to put my AI engineering hat on here for a moment. One of the things that we talk about when we talk about ethics all the time, when we talk about AI all the time, we talk about it’s like one monolithic model, right? In reality, there’s tons of models. Any solution. And what I think the solution is going to be, a version of it, is you need an AI ethics model for your universe, like a self -driving car. So you build a model. It’s not the model that drives the car. It’s a different model that’s built and uses synthetic data to synthesize your different use cases. And you train it on a core set of values. But that model is allowed to adapt to the environment. And it is given supervision over the driving module. So we’re not baking it in. I just kind of want to throw that out for the broader audience in the discussion. Ethics is a separate AI component, but it relates and it connects in with the systems and informs because it’s a hierarchy. So, anyway, throwing that out.
Max – 00:29:18: Yeah. I think there is some level of decoupling there. I know, Ron, in the defense industry, we’re going to see a lot of model validators, I’ll just say. A bunch of companies are just doing model validation. On the public side of the house or the private companies, I could see somebody just selling ethics models altogether, right? Where, hey, we can tell you if what you’re doing is ethical or not ethical. Even something simple as a student trying to get their homework done through AI, which we’ve seen. Right? Why do I need to learn how to write an essay when a computer can do it? Right. So, I can see an entire industry coming out of this as time goes on in terms of just the different kinds of models that can be layered in, in order to get a good decision support system going.
Ron – 00:30:03: Yeah, it’s sort of interesting, too, because I go back to the science fiction movies I’ve watched in the past, right, that you and I have all probably watched. And we talk about prime directives, things that are just, they cannot change. Right? And how then to take those and implement them. The engineers probably were the ones thinking, okay, so if I give this piece of metal with a bunch of software in it, prime directives, how do I know that those are actually implemented? And what happens when the prime directives come up against one another, whatever those might be? I know I’m using the movie terminology, and we’ve seen some of that. Hollywood’s been good at sort of giving examples of where this or where they start to compete. The prime directives don’t make sense individually and so forth. But being able to operationalize that, to your point, Joel, and more of a model that is also growing itself, you might argue, it goes back to what you said a moment ago, that while the pace of technology and what we can do is moving very quickly, I picture that ethics module really is the person sitting behind a desk, moving paper slowly, making sure, yeah, no, I know you want to go that fast, but here’s how we’re going to do it. And so that you have that counterbalance to the speed of technology, counterbalance with something that is looking on from either a historical perspective or combined with an ethical perspective to say, oh, yeah, no, no, but heading this direction, or no, you’ve got to stop and let’s talk about that before and make sure that we update our ethics model, if you will, before it’s permissible, as you said, Max.
Joel – 00:31:25: Well, Ron, I know we’re almost up on time, so I wanted to thank you for coming on the show, but before we let you go, I did want to ask this. When it comes to L3 Harris, what are some of the innovation stuff that you’re personally working on within your group that the Department of Defense or the industry can be excited about?
Ron – 00:31:43: So, there’s a number of things. I’ll share at least an area that’s near to me. When I flew back in the 1990s, we had various positions on the aircraft that were responsible for changing settings on machines. And I’m not trying to minimize it, but we’d have a communication systems officer. Their job was to switch to the next frequency because one wasn’t working or, you know, load up this next one. When you think about that type of position and you think about what that person could be doing for the service, you start to realize that, man, if I can put some amount of machine that’s able to understand its environment, just like any other artificial intelligence, and then be able to respond to that environment. You’re playing with walkie talkies and you have to play with the squelch all the time and you have to get it just right and you have to adjust it all the time. Well imagine if you never had to worry about that, which is sort of commercial electronics, right? It’ll figure out what that setting needs to be. Well, a similar thing. Why wouldn’t we progress that to the next level? Again, it’s about if it’s something, I won’t want to say innocuous, it’s very important, very critical, but yet the impact of a small error here and there isn’t like life threatening potentially, then maybe an application of artificial intelligence to that, then enable, like we were talking about previously, how do you enable that operator to make decisions? And maybe, rather than having to flip the knobs themselves, the AI is able to say, hey, this is what’s going on and recommend you move over here to this frequency or to this system, whatever the case may be, in order to maintain communications. It seems like that would be a really good thing to have. I don’t know about you, but I remember the first time I used Maps and it gave me one route to take. One route, like this is, take it, that’s the only way it is. Now when I look at it, and then it migrated to, well, I’m going to give you three routes, but then you’re going to have to decide on one and it’s never going to change. And then it went to, I’m driving along and it says, you know what, I think things have slowed down ahead. If you take this right over to the right, if you want to, then you’ll have the same ETA. And so there was that progression of implementation that went from initial, you only have one, to I’m going to give you some options, but then you have to choose to, hey, while you’re driving, I’m going to update it over time. Now apply that to a communication system as you’re flying through and whether it’s a cell tower or whether it’s a satellite you’re trying to connect to or somebody else. And all that connectivity comes available to you. Well, it’d be a lot easier to have somebody manage that for you or to have some artificial intelligence manage that for you and suggest changeovers or whatever the case may be, rather than you try to figure it out on your own and going back to the speed of operations is sometimes quicker than we can respond.
Max – 00:34:16: I know that sounds like a common task, Joel, on the non -government side, but on the government side, man, Ron, you’re taking me back to like my security forces, ComSac days.
Ron – 00:34:27: That’s exactly what it is. Right. And look for some out of band communication that ain’t available to figure out what is the next frequency we should be talking on. So I’m pretty excited about some of the stuff you guys are putting out there to support our military.
Joel – 00:34:40: That’s awesome. I’m looking forward to hearing a lot of success out of this, and I’m sure you’ll just build on it and continue to grow.
Ron – 00:34:46: I appreciate Joel and Max. Again, appreciate the time with y ‘all. It’s an important topic. Glad to see we’re not talking about something that’s going to happen five, 10 years from now, but in some cases like what happened last month or a year ago. And we’re sort of working through the now, what should we do about this? I think we’re in a good place from that perspective. And I appreciate you bringing us to light through your podcast.
Joel – 00:35:05: Yeah, absolutely. Again, we just wanted to thank you for coming on the show.
Ron – 00:35:09: It was a fun time. Thank you, guys.
Max – 00:35:14: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.
Joel – 00:35:25: Make sure to search for Cyber in Apple Podcasts, Spotify, and Google Podcasts, or anywhere else podcasts are found. And make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.