
Emerging Cybersecurity Risks

The Biden Administration Hands the Safety & Security of AI to Industry Leaders!



On this episode of the Emerging Cyber Risk podcast, we cover the recent meeting that President Biden had with some of the top AI cybersecurity leaders in the industry. The podcast is brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts.
Join us as we discuss the new initiative President Biden has introduced around the oversight of AI, and the three pillars on which it is based: safety, security, and trust. We discuss each of these pillars in detail, as well as the eight commitments that were made.

The touchpoints of our discussion include:

  • Why would the government do this?
  • Who is part of this initial group of voluntary members?
  • What countries are involved?
  • What is the scope of this agreement?


Get to Know Your Hosts:

Max Aulakh Bio:
Max is the CEO of Ignyte Assurance Platform and a data security and compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He has trained and excelled while working for the United States Air Force. He maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn
Ignyte Assurance Platform Website


Joel Yonts Bio:
Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a security strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over twenty-five years of diverse information technology experience with an emphasis on cyber security. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.
Joel Yonts on LinkedIn
Secure Robotics Website
Malicious Streams Website


President Biden’s Announcement
White House Briefing Documents
Ensuring Safe, Secure, and Trustworthy AI PDF

Max Aulakh

Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh…


Joel Yonts

Joel Yonts. Join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook.


President Biden

And today, I’m pleased to announce that these seven companies have agreed to voluntary commitments. For responsible innovation, these commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust.


Joel Yonts

Welcome to another episode of the Emerging Cyber Risk podcast. I’m your host, Joel Yonts, and with me, as always, is Max Aulakh. The sound clip we just heard was President Biden making comments at the end of an interesting, somewhat historic meeting with some of the top AI cyber leaders in the industry. The topic was AI risk management and getting voluntary commitments to tackle some of the challenges we think are coming.


So, in today’s episode, we thought we would dive into the announcement that came out of that meeting and go into some of the questions around it. Why would the government do this? Who is part of this initial group of voluntary members? What are some of the key elements of the announcement, and what is the impact for our listeners?


Max Aulakh

Yeah, when I saw this announcement, it personally looked very fluffy to me up front. But as we got deeper into it, it did have some interesting parts. A lot of times, I’ve seen the government do this simply because they don’t have the expertise, right? So one of the questions we always think about is: why would somebody do this?


Right? So you’ve got OpenAI, you’ve got Microsoft and other big firms, and now the government has to figure out how to get ahead of this. Getting a voluntary commitment is a good way to be a catalyst, to potentially figure out what kind of rules need to exist around this kind of technology.


So that’s my thinking on why they would want to do this. But Joel, who are the members? What are some of the companies that participated in this thing?


Joel Yonts

So it’s some of the players that you would imagine: the big platform players, Amazon, Google, and Microsoft, but also Anthropic, which is known for its models, as well as Meta.


Both are producers of large models, and then there’s OpenAI, which is obviously responsible for the ChatGPT craze from earlier this year. So I think it’s interesting, this group that was brought in. It goes hand in hand with a theme we’re going to get into: the scope of this was largely around generative AI, because I think they’re trying to tackle this in a bite-sized way.


And these are the top producers of generative AI technologies in the world. So I think that was another reason why they were chosen to be part of that initial voluntary group.


Max Aulakh

You know, Joel, when I read that list, I also saw some countries in there. But I also noticed who’s not part of it, right?


Your typical defense contractors, the Lockheed Martins of the world. We didn’t see any of those kinds of players. And I think it points back to the fact that they’re really going to the source of where the innovation is happening, where the technology is coming out of. And I did find it interesting that they had other countries involved as well, right?


They had, like, 10 or 20 countries listed as part of this, at least on the announcement on the actual website itself. Do you recall that, Joel?


Joel Yonts

Yeah, there were a number of countries, and they’re some of the players you might expect: countries that are friendly to the US.


And also innovators in this space. It’s going to be important as these technologies spread across the globe, because these companies are first to market with a lot of them, but there’s a lot being developed now. And let’s not forget: while the implementations are closed source, the algorithms are very much open, people. It’s not something we can corner the market on. So this problem is going to propagate far beyond these countries for sure.


Max Aulakh

Yeah, they’ve obviously got Japan, Italy, and Israel, which you would expect, but then you see countries like Kenya, Nigeria, and the UAE, where you typically wouldn’t think a lot of cutting-edge tech is coming out of. But man, times are changing, right?


So yeah. Joel, the other part of this that I found really interesting was, as you mentioned, scope. When people talk about AI, it’s so broad and deep, and you said this one is limited to generative AI only, which I found surprising.


Joel Yonts

Yes, I’m still trying to wrap my mind around it, but the statement specifically says, well, there was a tremendous amount of commitments made on behalf of these companies, which is obviously great and a step forward, but it was couched as applying only to generative models that are overall more powerful than the current industry frontier.


More powerful than any currently released models, and it lists the released ones specifically: GPT-4, Claude 2, Titan, and DALL-E 2. So I don’t exactly know how to interpret that, other than it may be a way for them to get out of their commitments real easy.


Yeah. I mean, I don’t know. Basically, it’s like, okay. 


Max Aulakh

Yeah, I would imagine this was built collaboratively, and the government kind of threw this together: hey, let’s research the internet; oh, I see DALL-E, I’ll put DALL-E on there. Right? And behind the scenes, I could imagine the organizations doing this have moved beyond whatever is available to the public. So that’s what I’m imagining: why limit the scope? Because everything here that we’re going to discuss actually applies. So, let’s get into the meat of it. What are some of the pillars? This is where President Biden highlighted a few of these things.


Let’s dive right into some of the commitments and what is actually part of this agreement.


Joel Yonts

Well, the first one. We heard the terms safe, secure, and trustworthy, or trust. And those are, like you said earlier, large, fluffy words, until you start diving into what these commitments mean.


Safe could mean a lot of things, but when we look at the tenets of it, it actually focused largely on making sure these generative systems are not compromised. The companies committed to internal and external security testing: they would release systems for testing to learn whether there are vulnerabilities, or they would seek external help in finding vulnerabilities before they are exploited.


So that was the core of the safe pillar. Now, security. Do you want to take security and kind of…


Max Aulakh

Yeah. I think when I look at safety, it is kind of fluffy; it could go a lot of different directions. But security is really about investing in cybersecurity from an insider-threat perspective.


And then, of course, protecting these unreleased models and the weights that go behind them, right? Everything that is proprietary about a model, they’re focused on protecting, which I find very important, because a lot of organizations are going to be interested in figuring out a way to protect this intellectual property.


And I love that they named actual insider threat, right? Usually, you don’t see that within a security model itself, right at the top, at a very early stage. So it seems to be a very big component: for those companies investing in cyber and in this AI type of technology, insider threat is a big deal, right?


Yep. And then third-party discovery and reporting of issues related to AI. Everybody’s been talking about supply chain risk management, so I was not surprised to see some indication of caring about third-party risk management here. Were either of those two things a surprise to you, Joel?


Do you think some other things will be added? I only see two commitments when it comes to security. I know that a lot of our listeners are cybersecurity pros.    


Joel Yonts

You know, it did take me by surprise at first. In the details around those commitments, it talks about model weights, and I thought that was a really odd level of detail to go to, treating them like core intellectual property.


And then it really dawned on me when we looked at the intersection of the generative aspect and these commitments: it’s not so much about protecting the system from being hacked; it’s really about making sure this leading-edge generative technology doesn’t fall into the wrong hands. It’s about disclosure and loss, because otherwise I don’t understand why it would be that big a deal for the government to say you need to protect your own intellectual property. What do you think?


Max Aulakh

Yeah, I didn’t actually think about it from that perspective, because the government could care less about for-profit entities. I mean, they care, right? They’re a catalyst; I shouldn’t say it that way. But yeah, I think a lot of it is about not getting it into the wrong hands. It almost reminds me of when encryption came out and we had these ITAR kinds of rules, where you couldn’t take certain types of encryption out of the country, because it’s that strong of a capability; it can obviously protect communications, things like that.


It reminds me of that era, many decades ago, and how the government did not want it to go out. That’s what it reminds me of, Joel, as I read that and some of your comments.


Joel Yonts

Yeah, that makes a lot of sense. And when you think about who they’re going to try to protect it against, well, let me ask you rather than me saying it: who do you think they’re trying to keep this information from?


I have a thought, but I want to hear yours first.


Max Aulakh

I mean, I think anybody that can gain an advantage over the US, and we can all think of some countries, right? Alibaba has their own cloud, their own AI; any near-peer adversary, as we call it, right?


Somebody trying to match the capability. But I think it’s anybody that’s not on that list, right? That’s why we saw that list of countries; there are some countries that are not on it. So I would think any of those other nations, but yeah, anybody outside the United States. They want to figure out how not to release this information.


Joel Yonts

I think that makes a lot of sense. And especially as we get into the next piece, we’re going to talk in just a moment about trust. I thought that they kind of worked hand in hand together. But when you think about who’s going to use generative AI technology for nefarious purposes, nation-states are certainly one of them.


But we also know a lot of criminal threat actors will as well. So my thought is: let’s say every nation shares amongst themselves, either directly or indirectly, through networks where information is gathered covertly, espionage, right? So nation-states will get this, and it will spread.


The criminal element, however, probably doesn’t have the same level of sophistication. Keeping this out of the hands of criminals looking to do fraud and social engineering and all that would really cut down on some of the misuses we’re going to see impacting industry and the citizens of the world.


So maybe that’s a large part of that commitment, but I agree: keeping it out of the hands of some of the other folks as well is going to be important.


Max Aulakh

And then, if we could touch on the security side, I think that brings up the fourth one, which I found interesting: third-party discovery and reporting of issues and vulnerabilities.


It reminds me of, you know, maybe this was written after the whole Samsung thing, where people were feeding source code directly into ChatGPT or other models, and somehow people are able to exploit working, live software that’s out there because it’s releasing vulnerabilities.


I don’t know what the intent there is, but I found that pretty interesting. It’s almost like you can discover other problems, almost like Google hacking, but finding incidents related to third-party software that’s out there. The government is interested in figuring out how to close that up.


Joel Yonts

No, I think that’s really insightful. I think that’s very interesting. So it’s more about a disclosure risk as you see it, not as a compromise risk of the system. 


Max Aulakh

Yeah, that’s how I see it. It’s more about discovering the third parties and then, of course, disclosing them or not, right? So if you can trick ChatGPT into giving you harmful information related to cybersecurity issues… I think that’s what it’s getting at, because this thing is really powerful, and it can spit out information that it’s not supposed to.


Right. I’ve seen some people out there use reverse psychology on ChatGPT to try to get information out of what it has learned, information its makers don’t necessarily want reported out. That’s how I read this, which was very interesting.


Joel Yonts

Yeah, I think so. I think that is, that is an interesting take.


One thing I did want to add, and I know we need to talk about the trust side of things, but on security, when we talk about AI security more broadly, why would you care about some of these things, either the model weights they talked about or this disclosure of information? There are two ways of looking at the problem. Part of it is getting information out, but the other is getting information into these systems, or bypassing them. There’s a whole slew of attacks geared toward model evasion, or input attacks, where you’re trying to get past whatever barrier is being produced by the AI system.


It could be an anomaly detection system, a filter for inbound email, or something that detects inbound missiles, something very kinetic. And if an attacker can understand the model weights, how the system is constructed, and the vulnerabilities in it, it can be attacked, basically negating its detection value or orchestrating evasion.


So I think those are some of the other reasons this is going to be important once we move past the generative topic, even though it wasn’t necessarily named specifically here. And let’s face it, this is a voluntary commitment, and the scope is generative, but one thing we should probably mention at this point: I think you and I both believe that whatever’s developed here will be a foundation that both the government and these industry leaders will build on, expand, and use in other implementations.
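The evasion attacks described here can be illustrated with a toy sketch: if an attacker learns a model’s weights, a small, targeted perturbation can flip a detector’s decision. Everything below, the linear “detector,” its weights, and the step size, is invented purely for illustration and does not come from the commitments document:

```python
import numpy as np

# Toy linear "detector": score = w . x + b, flag the input if score > 0.
# In a real system, w would be learned model weights; these values are
# made up to show why leaked weights enable evasion attacks.
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def detect(x):
    """Return True if the detector flags input x."""
    return float(np.dot(w, x) + b) > 0.0

# A malicious input that the detector currently catches.
x = np.array([0.8, 0.1, 0.3])
assert detect(x)  # flagged

# Knowing w, the attacker perturbs x against the weight direction
# (a gradient-style evasion step) until the score goes negative.
step = 0.1 * w / np.linalg.norm(w)
x_adv = x.copy()
while detect(x_adv):
    x_adv -= step

print(detect(x_adv))              # False: the perturbed input now evades detection
print(np.linalg.norm(x_adv - x))  # size of the perturbation that was needed
```

This is also why the commitments treat model weights as sensitive intellectual property: the loop above only works because the attacker knows `w`.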


Max Aulakh

Yeah, I think the outcome of this, whatever it happens to be, will probably lead to standards. It’ll lead to other regulations. Who knows, right? We’re at the very early stages of this, but I can see the government trying to figure out how to regulate this in a way that meets the intent versus the rule as written, because we can write something, follow it to the T, and completely miss the intent. And I think that’s going to be one of the biggest challenges, right? Because we see in this document that it can apply to broader AI, but they scoped it to generative only. So if I were at OpenAI or somewhere similar, I’m going to take what I apply here with these commitments and apply it to my other models as well.


Not just generative; it would be counterproductive to apply these principles across just a single model type. So that’s what I think is going to happen going forward. They’re going to take this, learn from it, and expand it into different standards and different regulatory kinds of rules.


So, let’s talk about this one. This one is kind of interesting: trust. Deploy mechanisms that enable users to understand audio and visual content. What do you think of that one, Joel? I’ve got so many thoughts, but with this particular commitment, what comes to mind?


Joel Yonts

So, two things come to mind.


One, when we’ve talked about trust and AI all along, it’s been about how we trust the decisions and output of these AI classifications and regressions and whatnot. This is a shift in the definition. Now, through the generative lens, it’s about how we trust what our eyes are seeing and what we’re receiving. Is it real, or is it generated by one of these algorithms? We’ve already seen so many of the horror stories and use cases of how this could go wrong. It could be presidential announcements the president never made. It could be a whole slew of social engineering attacks. The fact that, as part of this commitment, they’re going to make a way to distinguish what is and isn’t generated, using the watermarking you were alluding to there…


I think this is the biggest thing out of the entire meeting, because it’s a problem we didn’t know how to solve before. And now, if we’re getting voluntary commitments from the largest producers of these generative technologies to watermark, I think that’s a big risk management win.


Max Aulakh

Yeah, that’s a big piece of it, because if you can’t distinguish between a human and the tech, that’s really what it’s getting at. Call it what you want: deepfake, AI, robot, whatever. I do wonder what this watermark will look like, though, whether it’s going to be an actual watermark or some sort of pre-announcement.


Hey, this is not the real Donald Trump, right? We’ve seen his latest commercial, with the rap song or whatever he’s got going. So I think that’s really what it’s getting at: election hijacking, those kinds of things, and not just here in the U.S. but all over the world. I think India launched the first-ever news anchor that’s a hundred percent AI.


So that’s wild, right? You lose that human personality, that human touch. And if it’s indistinguishable from reality, that’s what they’re trying to address, because you lose the opinion of a person, right? What does that person believe?


Joel Yonts

Yeah, and on the nature of the watermark you mention: the other component to this, when you look into the details, is that there’s going to be an API released as well, so that you can automatically determine whether content was generated by one of the generative AI models.


So that will allow it to move into an automation space where filters can do this automatically, whether it be incoming web video or even audio. As for its nature, my guess, going back to what we’ve seen in the past: there might be a visible watermark or an audible banner of some sort, but a lot of times these technologies embed data patterns outside the visible or auditory spectrum of humans, and you can put a lot of data in that. We’ve seen a lot of ways in the past where you could hide data within those layers. My guess is the watermark is also going to be hidden there. It would be invisible, but mathematically built in so that you can’t easily extract it, because if it were just a simple watermark, you could go into Photoshop’s generative AI, say “remove the watermark,” and it would remove it.
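The kind of hidden, machine-readable mark being guessed at here is classic steganography. As a toy illustration only (no vendor has published such a scheme as part of these commitments, and real generative-AI watermarks are far more robust), here is a least-significant-bit sketch that hides an invented `AI-GEN` marker in image pixels and detects it programmatically, the way a filtering API might:

```python
import numpy as np

MARK = "AI-GEN"  # hypothetical marker string; real schemes use robust statistical signals

def _bits(s: str):
    """Expand a string into its bits, least significant bit first per character."""
    return [(ord(c) >> i) & 1 for c in s for i in range(8)]

def embed(pixels: np.ndarray) -> np.ndarray:
    """Hide MARK in the least significant bits of the first pixel values.

    Each touched value changes by at most 1 out of 255, invisible to the eye.
    """
    out = pixels.copy().ravel()
    for i, bit in enumerate(_bits(MARK)):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the mark bit
    return out.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """Recover the LSBs and check whether they spell out MARK."""
    flat = pixels.ravel()
    n = len(MARK) * 8
    bits = [int(flat[i]) & 1 for i in range(n)]
    chars = [chr(sum(b << i for i, b in enumerate(bits[j:j + 8])))
             for j in range(0, n, 8)]
    return "".join(chars) == MARK

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in "generated" image

marked = embed(img)
print(detect(marked))  # True: the hidden mark is recoverable by an automated filter
print(np.max(np.abs(marked.astype(int) - img.astype(int))))  # at most 1 per channel
```

A bare LSB scheme like this does not survive compression, cropping, or re-encoding; building a mark that does, while staying imperceptible, is exactly the hard part alluded to above.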


Max Aulakh

Yeah, actually, that’s very interesting; I didn’t think of it like that. I could imagine something like this: if you’re going to produce generative AI content, the watermark is already baked into that production. And then, if you’re going to use it for a public announcement or some broad-scale purpose that impacts society, you almost have to submit it to check whether it’s generative, real or fake. I could see that sort of thing happening, because part of it is that mathematical element. But this thing came out of the president’s office; it’s got to have something that a regular human can understand, someone who doesn’t know the techniques used in cryptography and steganography to figure out whether a signature is hidden or visible. I took it as a very visible signature, so a member of the general population can understand: hey, this really is the president talking, or this is not the president talking.


I don’t know how they’ll do it because, yeah, you could just quickly remove it or add your own.


Joel Yonts

Yeah, you make it disappear: oh, that’s just generated. My guess is it would have to be both, because you can’t have an API, or do this at scale, unless you have something programmatic; but your point stands that if humans can’t see it, then there’s no value in it either.


Max Aulakh

Yeah, exactly. Because for most people who are non-technologists, how are they going to recognize it? And I think that’s where the last principle, the commitment we’ll get to, comes in: it talks about addressing society, right? And how do you do that without it?


Well, that’s just my take on it. So I think that’s actually the next one. I don’t know how they’ll do this, but: publicly report capabilities, limitations, and societal risk. What does that even mean? And then, of course, fairness and bias. Who’s going to be the arbiter of that?


Joel Yonts

That’s where, again, I don’t want to be too negative, because I think this is a great thing, but anybody can commit to that, because how do you hold anybody accountable for it? It’s so fluffy. But I think the fact that there’s going to be some effort, an attempt to work in this space, means we’ll move the needle to some degree.


But I don’t know what the success criteria look like, and it’s not concrete. There are some details: every new release is going to come out with some sort of report. And when I looked at it, it looked kind of like the threat reports or threat analyses we do today, where for a new piece of software or technology coming out, you do an attacker-viewpoint threat analysis to see how it could be abused, what harm could be associated with it, and how it might be attacked. There were elements of that in the language, so it may be an extension of the threat practices already in place at these companies.


Max Aulakh

Yeah, and when I look at the seventh one, prioritize research on societal risks, harmful bias and discrimination, and protecting privacy, I think those are all good things. Nobody wants to be discriminated against in any negative way. But trying to figure out a bias analysis, a fairness analysis, right?


With a bunch of philosophers sitting there and giving that report, man, I don’t even know where they’re going to start with analyzing that.


Joel Yonts

I mean, I think that’s the AI ethicist, the rise of that entire field. And there’s a lot of work to go, but again, I don’t know what success looks like.


I don’t know what the end looks like, but as long as we start talking about it, that’s a step in the right direction, I guess. But there’s a long way to go for sure. So I don’t know, from my viewpoint, what that’s going to look like.


Max Aulakh

Here’s the last one, Joel. And then I’ll ask you what we think the impact is going to be on our listeners.


But the last one, I found to be very fluffy. This one goes into the ether of the unknown as to what it even means: deploy frontier AI systems to help address society’s greatest challenges. Right. It’s solving world hunger now.


Joel Yonts

Man, I was going to give a Southern reference. In the South, if you’re going to say something trashy about somebody, you say “bless their heart” or something at the end to balance it out. This, I feel, is the “bless your heart” of AI: we’ve talked about all the risks associated with AI for so long, we’ve got to say something positive so it doesn’t all come out as negative. That’s what I feel like. What do you think?


Max Aulakh

Bless your heart.


Joel Yonts

It’s not all bad. It’s not all scary. We’re going to do some good stuff too. 


Max Aulakh

That’s really what it’s saying, right? So, to recap, as security professionals, I think there’s a strong emphasis here on investing in cybersecurity and, of course, insider threat; I found that very interesting.


There’s definitely a focus on deepfake technology and how we watermark all of that. But from your perspective, Joel, what is the impact on our listeners who might actually be using some of this technology?


Joel Yonts

You know, there are the direct benefits and the direct things we’ve talked about, and we’ve kind of hashed through those already. The big one in my mind is the watermarking, giving us a way to tell what’s generated or not. I think that’s going to be very important, because that’s been one of the biggest concerns we’ve heard so much about. But when you zoom out a little more strategically, there are a couple of things I think about. One is that every time something like this comes out and there’s a discussion, we fill out our nomenclature, our vocabulary, around this concept of safe, secure, and trustworthy, or trust.


And what I see is that it’s important to build on those, and as we expand beyond generative, it gives us a way to talk about AI within our own companies in that same vein. So I think that’s part of it. I think the other thing is that this is the precursor to legislation; in the notice, they directly mention that they’re working toward legislation and regulations around this type of thing.


One of the tenets I’ve come to believe over decades of cybersecurity experience is that self-regulation is the way to go. You do not want governments to regulate you, for a lot of reasons: it may miss the mark, it may be over-prescriptive in areas, and so forth.


And if these companies get serious about these voluntary commitments, and we join in and start to build on them, it means the government has to do less. They may regulate some, but there’s less work to be done, and we know it’s going to be more on-target rather than heavy-handed and maybe overly bureaucratic at times. So I think that’s the big important piece: take the nuggets we get out of this, but also start running with them and thinking ahead to other ways this could be applied.


Max Aulakh

Yeah, Joel, I would definitely agree with that. If we can self-regulate, we minimize the overhead on the government, because they don’t actually know what goes on inside organizations, companies, and businesses. So I think you’re absolutely right. A big takeaway is: look at the principles, look at the pillars they’re discussing, and let them shape potential internal policy, internal regulation, and internal management of the use of these systems, right?


So, as a user, you may not be developing any of this technology, but you can still take away a good amount of thought process here on how to actually put some guardrails around using this technology. 


Joel Yonts

Absolutely. Well, I’ve got a practical question back to you. You’re a user of some of these companies’ products, and they’ve got this voluntary commitment.


So, any advice on how we can hold them accountable for this voluntary commitment? 


Max Aulakh

Yeah, you got to pay them.


Yeah. I mean, it is voluntary, right? So I’m only going to commit to what I’m already doing, I guess. But I don’t know. I’ve got hopes that this wasn’t written by just the government; I hope they actually facilitated this discussion. They always do. But typically there’s a level of lip service, a level of “all right, just sign up for stuff, don’t be noncommittal,” right? I hope it wasn’t that, and I don’t think it was, Joel, personally, because, like we talked about before, they want to figure out how to regulate this thing. They want to be at the table. We heard from Sam Altman, the CEO of OpenAI, and he said, yeah, we want to be partners in this. And usually you don’t see private companies running to the government; that was a very odd move. So I think it’s going to be up to them to hold themselves accountable, because if they don’t, they know the kind of damage it can cause. That’s just my take on it.


Joel Yonts

Yeah. I was thinking of you as a customer. As customers, as we talk to our representatives, we can remind them of this now. Individually, compared to maybe a Fortune 100, reminding them of that commitment may have a different level of impact, but I think we should remind them and hold them accountable in every conversation, whether it’s a sales call or a delivery of service: by the way, you committed to these things, and my perception is that this service you’re delivering does, or doesn’t, seem to follow those principles. I think that’s probably something practical we could do as well.


Max Aulakh

Yeah. I would hope they would take that in as appropriate feedback, you know.


But Joel, I think this was a fascinating topic for a lot of our listeners. If you’re listening in, over the next few episodes we’re going to cover some similar interesting tidbits that are happening in the market. As for this document, this voluntary commitment, we’ll have the links out for you to take a look at yourself. We just wanted to thank you all for listening in.


Joel Yonts

Thanks, Max.


Max Aulakh

Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.


Joel Yonts

Make sure to search for Emerging Cyber Risk in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.

