Reckless Compliance

Use of Artificial Intelligence for NIST Controls Responses – Perspective from Air Force ISSM


Max Aulakh and Uliya Sparks, an ISSM at SAF Mission Partners Environment, discuss the potential of AI in federal compliance. They explore ISSMs’ challenges, including managing multiple systems and navigating complex policies like NIST and FedRAMP. Uliya highlights the slow adoption of AI due to concerns about data sensitivity and job displacement, stressing the need for human expertise in validating AI-generated responses.

Topics we discuss:

  • Artificial Intelligence in context of Control Responses
  • Tool limitations and how we as humans can address them
  • Bringing awareness of our work to a younger generation


Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a Data Security and Compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He has trained and excelled while working for the United States Air Force. He maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

Max Aulakh [00:00:00] 

Welcome to the Reckless Compliance Podcast, where we learn about the unintended consequences of federal compliance, brought to you by ignyteplatform.com. If you’re looking to learn about cyber risk management and get your product into the federal market, this podcast is for you. Or if you’re a security pro within the federal space looking for a community, join us. We’ll break down tools, tips, and techniques to help you get better and faster through the laborious federal accreditation processes. It doesn’t matter what type of system or federal agency you’re dealing with. If you’ve heard of confusing terms like ATOs, FedRAMP, RMF, DISA STIGs, SAPs, SARs, or newer terms like cATO, Big Bang, OSCAL, and SBOMs, we’ll break them all down one by one. And now, here’s the show.


Thank you, everyone, for tuning in to this exciting episode of Reckless Compliance. Today, we’re going to learn about artificial intelligence in the context of control responses. If you’re responsible for writing or responding to control narratives, one of the main things we do as security and compliance professionals, this one is for you.


So, before we get started, I do have a special guest with us today, and I thought about this topic quite a bit, and I thought, you know, who is going to be the best person for this? And it’s gotta be somebody working within the Air Force as an ISSM, and I couldn’t think of a better person than Uliya Sparks.


Uliya, welcome to the podcast. Thank you so much for being here for our audience and others that are listening. Can you introduce yourself? Tell us a little bit about your background. I know you were in the Air Force, so tell us a little bit about yourself. 


Uliya Sparks [00:01:40]:

Hi, thank you, Max, for having me on this podcast. Definitely very happy to be here. My name is Uliya Sparks. I currently work with SAF, Mission Partners Environment, as an Information System Security Manager. How I started with IT was back in 2010: I got out of the military, the U.S. Army, looked at our industry, decided IT was booming, and got into something I’m actually really passionate about now. Cybersecurity is definitely a big topic, and through different challenges and opportunities, the Air Force started me off with the PAQ program, the Palace Acquire internship program, where I got to hop around and get different experiences from a technical perspective. Where I am today, though, I really enjoy the policy and compliance aspects of cybersecurity, and I manage a handful of systems that keep me busy all the time.


Max Aulakh [00:02:51]: 

It’s always nice to get a prior Army person in the Air Force. I know when I was in the Air Force, I was looking to go extra gung ho, so they had this program called Blue to Green. I never did it. 


Uliya Sparks [00:03:04] : 

Heard of it. 


Max Aulakh [00:03:05] : 

Awesome. So right now, for those of you who are listening: what is SAF, Uliya?


Uliya Sparks [00:03:11]:

Secretary of the Air Force. 


Max Aulakh [00:03:12]: 

Secretary of the Air Force. Okay, cool. So as an ISSM, I know that we have to answer all of these controls, AC-1 through whatever, right? All the policy controls and things like that. What’s that like for you? How do you do that today? Are you using spreadsheets? I know some people use eMASS. Take us through the journey of being an ISSM who’s actually managing these crazy systems.


Uliya Sparks [00:03:36]: 

Yeah, sure. Excel, definitely. Personally, I don’t really like to use Excel; a lot of data gets lost between the different people that come through. But eMASS, and don’t ask me about the abbreviation right now, is the tool where we keep our packages and everything we have to ensure we comply with.


From a control perspective, it’s never-ending; it’s one of those things we have to continue to read up on. And from a skills perspective, it’s a tedious job. It takes somebody who is patient and somebody who is well-versed in looking at the different policies that come out, the different best practices.


You know, we’ve got the NIST, but then we have the CNSSI, the FedRAMP, just different policies that are out there. So from a control perspective, there are a lot of control families, and you have to figure out what you categorize your system at, and then match it to what NIST says, what FedRAMP says.


The CNSSI 1253, what does that say? All of those have specific tasks associated with them.


Max Aulakh [00:05:04]: 

Yep. Yeah. I think you’re talking in our audience’s love language here, right? All these acronyms: CNSSI, FedRAMP. And I get it, it requires a detailed person. It is tedious. But I think, overall, there’s so much confusion, and there’s been a significant underinvestment in our community when it comes to staffing the ISSM role.


Because I hear this all the time: hey, I’ve got 12 systems, I’ve got 15 systems, and it’s one person. You multiply 15 systems by 300 to 600 controls, whatever, and that’s a lot of work. That is incredible work.


Uliya Sparks [00:05:43]: 

Yeah, yeah, totally agree. You know, with the boom in artificial intelligence, I am pro integrating that capability into our eMASS or into the way we do our tasks.

However, I was just reading through some of our DAF CIO’s objectives, and number six on the list is data and AI, and the strategy that all services should take to incorporate and innovate artificial intelligence into the environment. But as I was reading through it, it says strategy, it says working group, it says compliance.


I don’t think I’m going to see it in my lifetime, honestly. I’m still going to have to read through the different policies and match them to what eMASS tells me I need to be doing.


Max Aulakh [00:06:37]: 

I hope you’re wrong.


Uliya Sparks [00:06:39]: 

I hope so too, for my sake. I’m not retiring just yet.


Max Aulakh[00:06:44]: 

I think you’re right. There’s a lot of truth to that, because we’re so averse to adopting new technology.


I know on the outside everybody is playing with OpenAI, and we can see the use case, right? Like, hey, I can give you data that is not sensitive; give me some compliance statements that’ll help me with my writer’s block, that’ll help me quickly respond. So I think you answered one of the questions I was going to ask, which is whether we’ll actually see it.


But let’s just ignore that for now, right? What do you think about the applicability of it? Do you think it could actually help ISSMs do more work, or do the work they’re doing more efficiently? Is there actually a use for this kind of technology in control responses?


Uliya Sparks [00:07:30]: 

So whenever I talk about AI, I always have to give a little disclaimer.


Max Aulakh [00:07:36]: 

A little disclaimer, all right.


Uliya Sparks [00:07:36]: 

What I would say is, in my opinion, how I feel about AI has nothing to do with the rest of the world, and I don’t think it’s right or wrong in any way. But where I stand with it is this: I don’t fear it. AI is nothing but algorithms, and it comes down to how we, as humans, introduce those algorithms to the capability we want to add them to.


But with that said, I think one of your questions was going to be: from an SCA/AO perspective, how do we incorporate AI into our environments to support us ISSMs, ISOs, and anybody else who is maneuvering through a whole lot of data? How do we automate it, and how do we make sure it’s a repeatable process? Because not every system is alike.


So with that said, I do believe it takes the right people in the right seats. I think there are people who want to do right by compliance and cybersecurity, but it may not be for them. I think AI can help in that aspect: making sure controls are effectively addressed between the engineers and the admins, the way we mitigate risk, you know, from a high down to a moderate, but then also ensuring we go back to those mitigations and actually address them.


Max Aulakh [00:09:26]: 

I appreciate you being careful on some of this, because I am too. But I’ll say it candidly: it’s the paperwork game, right? And it shouldn’t be just a paperwork game, and with AI, this paperwork game could just explode, right?


Uliya Sparks [00:09:44]: 

Seriously. I actually had somebody, I forget what role he has, but it’s a technical one, and he straight up told me, “It’s just paperwork, Uliya.” And I just thought: how do I ensure the security aspect of our networks and how we address the risk? Because the risk is real. The threat is real.


Max Aulakh [00:10:11]: 

No question.


Uliya Sparks [00:10:12]: 

Ensuring that even the lowest-level Joe understands that and then helps support it. And, you know, again, there’s no right or wrong answer. We just have to make sure that we do our due diligence.


Max Aulakh [00:10:25]: 

Yeah, I think some of the controls that are really paperwork-driven are repetitive. I think of one like awareness and training: okay, we have to go into whatever that portal is and click through that horrible training. I remember that from being in the DoD, right? Like, we...


Uliya Sparks [00:10:40]: 

We still have it. 


Max Aulakh [00:10:41]: 

You still have it? Okay. All right. So for some of those administrative things, I think it can take the workload off. But for some of the technical aspects of a control, like how the network actually operates, how it’s actually being secured, you can’t just push stuff in. It’s garbage in, garbage out, you know? I don’t think we’ll see AI answering 100 percent of everything, but hopefully we’ll start to. I know on the commercial side we’re doing this: when it comes to administrative, policy kind of stuff, we’re able to leverage it. But for technical things, we’re actually afraid to submit it; it’s sensitive data. We feel like that is not being a good steward. You’re trying to do cyber, but it’s actually unethical to submit a lot of this information.


So anyway, that’s kind of what I have seen. But what have you seen internally? There was this project, the Air Force’s NIPRGPT. Are you familiar with that, Uliya? Have you heard about that project?


Uliya Sparks [00:11:38]: 

To be candid? No, I hadn’t. And once you mentioned it to me, I went and did some background, not research, just trying to see who in the Air Force knows about NIPRGPT. Out of about a hundred people I’ve asked, maybe two had heard of it.

So it’s clear to see what industry is trying to do for the DoD, and I’m not speaking for the DoD, I just want to make sure I say that, but from a personal observation, what industry is trying to do, we’re too slow to incorporate. NIPRGPT is a great tool for automating and coming up with certain results we want to see, because not everybody is good at combining different data and putting it into a paragraph.


However, what I saw was a conversation happening about how we manage the sensitive data from the different systems. And a couple of weeks ago, I was talking to somebody who works at the FBI. We were talking about our yearly appraisals, and he was directed not to use any sort of GPT to write up a bullet for his appraisal.


Max Aulakh [00:13:15] : 

Really? Wow. 


Uliya Sparks [00:13:17] : 

Yes. Yes. 


Max Aulakh [00:13:18]: 

I had no idea. 


Uliya Sparks [00:13:19]: 

Because of the sensitivity of what we do day to day. One sentence on its own may not jeopardize the sensitivity, but combined, and AI is now combining a lot of our data, it could come out to be sensitive data.


Max Aulakh [00:13:39]: 

It could. And I saw some of their use cases.

I always thought, you know, when I was in the Air Force, we had to do those enlisted performance reports, EPRs, right? They’re terrible. I never had to...


Uliya Sparks [00:13:52]: 

That is going to be the end of me. 


Max Aulakh [00:13:53]: 

Yeah, I never had a firewall 5. Like, oh, it’s got a 3 or a 4, I was good with that. Yep, so for some of those use cases, I think if you’re doing general administrative stuff, it’s good.


But yeah, with some of my EPRs, I remember they were telling us at the time to quantify everything, right? How many deployments did you do? How many people did you guard? How many X, Y, Z things did you do? So I can see what you’re saying in terms of quantification and then submitting that data into something like this.


And if you don’t know, right, if you don’t know where it’s going... So I do think it’s an interesting use case, but I always go back to the fact that the investment in information security professionals and ISSMs is extremely low. And unless you’re seeing the change, I don’t think they’re going to say, okay, let’s go hire a thousand more ISSMs, you know.


Uliya Sparks [00:14:48]: 

Just in my branch, we’re supposed to have eight, I believe, no, ten ISSMs for a handful of capabilities, and there are only four of us right now; those seats have been empty since last year. So I think it definitely shows there’s a lack of the needed skills. But it’s not just a people problem; there are people who want a job.


People who want to get hired with the Air Force as civilians. I think the challenge I’m seeing between industry and the DoD is the pay scales. You know, when you come in as a cybersecurity professional, you may get a little bit less versus what you would get outside, and then there’s keeping everybody trained as well.


It’s definitely one of those jobs that you need more of, but there just aren’t enough of us.



Uliya Sparks [00:15:52]: 

I don’t know if that makes sense. 


Max Aulakh [00:15:54]:

No, it does. Actually, Uliya, on the outside, because that’s where I’m at, we work with a lot of ISSMs who are prior service. They’re managing all these FedRAMP packages, and we help them out as best we can, either as auditors or by giving them the automation they need.

And what we’ve come to find out is that even that side of cyber, you have pen testers, you have all sorts of people, is significantly undermanned and under-resourced, even on the commercial side of the house. So this is where I go back to: is there an opportunity to leverage AI without fearing it, right?


Without fearing it, is there an opportunity to reduce that burden? Because it would be nice to say there are 300 controls and I only need to answer two. The DoD and the government, they’re not reducing them. I mean, I would love for NIST to say, yeah, you don’t need to answer all 500, you only need two, because that’s what matters to your system. But I don’t think I’ll see that in my lifetime.


I feel like I won’t see that, but I might see AI applicability to controls in, I don’t know, the next five years. Do you think that’ll happen in the DoD within the next five or six years, or am I dreaming and smoking something here?


Uliya Sparks [00:17:14]: 

I feel like I should say I plead the fifth, based on the DoD CIO’s objectives.

And like I said, number six was data and AI. I don’t believe that we are ready to have that type of capability.


Max Aulakh [00:17:32]: 

So it’s gotta be done on the outside and maybe brought in. Maybe, right? Because a lot of solutions are... yeah, I see what you’re saying.


Uliya Sparks [00:17:41]: 

Because I’m very careful just to not elaborate too much on what we can and cannot do.

But I think we have a whole lot of personnel who fear change, and we as humans are creatures of habit. I have heard a lot of programmers say, “AI is going to take over my job.” But would it really? Because we still need human thought and logic applied to the algorithm that AI is supposed to be running for you.


So, I mean, if you look at it from that perspective, AI will definitely speed up certain processes. But could we be replaced? We might. I mean, if we can figure out how to automate the controls with what we need to answer, the policies, and the evidence that we have, we may not need ten ISSMs, because then we have this automation and all we need to do is validate and verify.


But as of right now, I don’t believe we’re ready. I mean, you said it yourself: NIPRGPT, we can’t even bring that on, because people fear the data is going to be, what’s the word, “loosey goosey,” that it’s going to be overshared.


Max Aulakh [00:19:03]: 

It’s a hard thing, but I’m surprised you actually heard somebody say, verbatim, that it’s going to replace jobs.


I mean, yeah, I have heard that too. Even on the commercial side, we’ve heard it from marketers, right? Like, hey, it’s going to replace all the marketing jobs in whatever ecosystem. Right.


But yeah, I am of the opinion that we’ve got to move forward. It’s here, the cat’s out of the bag. Just like cloud, it’s here and it’s here to stay.

And it’s going to help us make better decisions, faster decisions, and enable reuse. We need more ISSMs; I don’t care how much AI you throw at it, these tools have limitations, right? So let’s talk about that. We’ve got an idea where, in a perfect world, this thing is automating the control responses, not the paperwork side of the house.


Maybe it’s picking up the STIGs and doing all the ACAS, all the technical data linking. In that perfect world, which I think we’ll probably never reach because people are always writing code, what are some of the things that will still require an ISSM?


There are some people who probably share that feeling, Uliya, where it’s like: I don’t want to use AI for my control responses. What am I going to do? What’s going to be my job, right? Or, as a government person, will I be stuck doing other types of documentation work? So from our perspective as ISSMs, what are some of the things that are actually going to require the human touch, because you can’t automate them with AI?


Uliya Sparks [00:20:41]: 

Interesting. That’s a very interesting question; I have not thought that far. Thinking through what I do now: if things were to be automated, we’re still having to verify data logs. Is AI performing and mapping the controls to the ACAS or STIG results appropriately?


Because there’s human error, but how much can AI err because of some algorithm? It’s happening today with ChatGPT, where you’ve got to have a human in the loop. And I hear this often, I feel like I’m jumping around, but I’ve got to say it: I hear from professors who work with students that they get papers written by AI, and the kids are not knowledgeable enough to read through them and fix the problems the AI has written in.


So I guess I would say the same thing for the ISSM role. Yes, it’s doing the work a lot quicker for you, but are you capable and knowledgeable enough to catch misalignments with what the right standard looks like?


Max Aulakh [00:22:01]: 

It’s going to be a lot of reading.


Uliya Sparks [00:22:03]:

Yeah. I feel like that’s part of my everyday life now.


Max Aulakh [00:22:11]:

By the way, that’s a great analogy with the students. I actually had a friend, well, an acquaintance from church, and he said last year in school, this AI thing had just come out, and he used it at the very beginning, before they knew how to catch this kind of stuff. But you do have to review; you have to check, right?


And it’d be interesting to talk to an SCA, because we perform the role of an SCA too, right? We’ve got to read through all these crazy documents. I think, overall, it can make the job easier for an ISSM, but for the SCA who has to validate, and for the authorizing official who has to look for material risk in the mountain of stuff coming through, it’ll be interesting.


I think if we start to use it, one side will be like, “My work’s easy,” and the other side will be like, “Wow, my work is way harder,” you know?


Uliya Sparks [00:23:06]: 

Well, I think that’s the relationship that has to be established, and I’m seeing some of those challenges today: the trust factor between the AO/SCA office and the ISSMs, ISOs, and the technical personnel.


I am constantly thinking, who’s trying to pull a big one on me? Basically, if I don’t know something about a piece of software and how it’s being used within a system, how am I answering that control, and am I answering it appropriately based on what I’m being told?


Uliya Sparks [00:23:45]: 

And that’s when I have to get into the nitty-gritty details and ask: okay, you’ve got to show me how this is working, because I don’t trust you yet.



Uliya Sparks [00:23:58]: 

There’s a whole trust factor that I think will have to be established.


Max Aulakh [00:24:03]: 

Yeah, I agree with you, because it’s impossible to know every technology; every stack is always changing.


And I know there’s a part of this that is like, AI can make things transactional and fast. But another part is, if you can’t trust the data going in and out, let alone the person... So this is where I think there are going to be limitations on this thing, right? We might be able to push an SSP that, you know, used to take five days for a thousand pages, and now I can do it in an hour, whatever. But somebody on the other side has to get through the pile of data. And if they don’t have that trusting relationship and they don’t understand the stack, oh man. I haven’t thought about it from that perspective yet, Uliya, to be honest. Because right now, we haven’t really gotten any packages that were built with AI.


We’ve built them ourselves, but we haven’t really gotten them at full scale speed yet, which I think we eventually will. 


Uliya Sparks [00:25:01]: 

That’s cool. That’s cool to hear. I’d be curious to see it, I mean, to get some hands-on time to see what that looks like.


Max Aulakh [00:25:08]: 

Yeah, yeah, absolutely. So I know, Uliya, I appreciate you coming on.

This is, again, a very short discussion to get your perspective. So for those listening out there, is there anything else you’d like to add? Any insight as the person on the other side, the ISSM who’s typically looking at these packages?


Uliya Sparks [00:25:28]: 

Yeah, thank you for having me on the show to be able to speak to it. One of the many things I want to emphasize is that one way to battle some of the skill and resource challenges we see is to start early. As a parent, you’ve got to teach your kids how to be safe and efficient with technology. And one of the things I like to do is go out into the community, engage, collaborate, and figure out how to ensure people are thinking security when they’re using their technology.


So ultimately, at the end of the day, I say be vigilant and be in the know, because knowledge is power. And if you think the answers are going to be provided to you by AI, you’re going to be pretty lost. And that’s on us.


Max Aulakh [00:26:23]: 

Well, we certainly appreciate it, Uliya, and I couldn’t agree more. I’ve got little kids myself. They’re already using some of this stuff, probably better than me, for now.

So thank you so much for being on the show.  


Uliya Sparks [00:26:34]: 

Thank you. I appreciate you.


Max Aulakh [00:26:37]: Thank you for tuning in. If you enjoyed the podcast, head over to ignyteplatform.com/reckless, where you’ll find notes, links, and additional content. Head over to iTunes to subscribe, rate, and leave a review.

