Emerging Cybersecurity Risks

Challenges in Developing, Democratizing, and Adopting AI with Dr. Amit Shah, Founder and President of GNS-AI LLC

👉 How does the data science community view privacy in the healthcare industry?

👉 Challenges in building and adopting AI models

👉 Dealing with FDA compliance while validating AI models


Welcome to this episode of the Emerging Cyber Risk podcast, brought to you by Ignyte and Secure Robotics, where we share our expertise on cyber risk and AI to help you prepare for the risk management of emerging technologies. We are your hosts, Max Aulakh and Joel Yonts. Today’s guest is Dr. Amit Shah, Founder and President of GNS-AI LLC, a data science/ML/AI consulting business specializing in building data-based decision support systems. Our discussion focuses on the challenges in developing and adopting AI solutions, unifying democratized models, and the difficulty of developing FDA-compliant models for the healthcare industry. We also touch on the GDPR challenges that arise while building AI models.

Topics we discuss:

  • How does the data science community view privacy in the healthcare industry?
  • Challenges in building and adopting AI models
  • How do you overcome the challenge of data access while building AI models?
  • Dealing with FDA compliance while validating AI models

 

Dr. Amit Shah Bio:

Dr. Amit Shah is the Founder and President of GNS-AI LLC and has over thirteen years of experience in data science and AI. Dr. Amit helps businesses of all sizes unlock the power of their data with his expertise in extracting insights from complex datasets using cutting-edge machine learning and artificial intelligence techniques. As President and Founder of GNS-AI, he is committed to providing businesses with innovative solutions that improve efficiency, automate tedious manual processes, and deliver decision support tools for better decision-making. He is also a keynote speaker at industry events.

 

Dr. Amit Shah on LinkedIn

GNS-AI LLC Website

 

Get to Know Your Hosts:

Max Aulakh Bio:

Max is the CEO of Ignyte Assurance Platform and a Data Security and Compliance leader delivering DoD-tested security strategies and compliance that safeguard mission-critical IT operations. He trained and excelled in the United States Air Force, where he maintained and tested the InfoSec and ComSec functions of network hardware, software, and IT infrastructure for global unclassified and classified networks.

Max Aulakh on LinkedIn

Ignyte Assurance Platform Website

 

Joel Yonts Bio:

Joel is CEO & Research Scientist at Secure Robotics and the Chief Research Officer & Strategist at Malicious Streams. Joel is a Security Strategist, innovator, advisor, and seasoned security executive with a passion for information security research. He has over 25 years of diverse Information Technology experience with an emphasis on Cybersecurity. Joel is also an accomplished speaker, writer, and software developer with research interests in enterprise security, digital forensics, artificial intelligence, and robotic & IoT systems.

Joel Yonts on LinkedIn

Secure Robotics Website

Malicious Streams Website

 

Max – 00:00:03: Welcome to Emerging Cyber Risk, a podcast by Ignyte and Secure Robotics. We share our expertise on cyber risk and artificial intelligence to help you prepare for risk management of emerging technologies. We’re your hosts, Max Aulakh

Joel – 00:00:18: and Joel Yonts. Join us as we dive into the development of AI, evolution in cybersecurity, and other topics driving change in the cyber risk outlook.

Max – 00:00:27: Thank you for joining us, everyone. This is the Emerging Cyber Risk podcast. This is Max Aulakh and Joel Yonts. Today, we’re going to be talking with Dr. Amit Shah. Dr. Amit Shah has been a data science professional working at Abbott Labs, but I’ll let him tell his story from his side. Dr. Amit, welcome to the show. So, tell us a little bit about yourself.

Amit – 00:00:51: Glad to be here, Max and Joel; it’s great to meet you guys. Thank you for the invitation. So, I started off with a pre-med and computer science degree from NYU; I always wanted to be kind of a physician-engineer. I wanted to meld the expertise of data, artificial intelligence, and computing into medicine. So I did the math, and biomedical engineering came up, right? I applied for my Ph.D. in biomedical engineering, I got into the University of Illinois, and I moved to Chicago about 12 years ago. I was really fascinated with the brain, especially neural engineering, and so that became the focus of my studies, especially in robotics, prosthetics, and motion planning. What was interesting is that my father gave me this National Geographic magazine, and on the cover was an amputee with a bionic limb. I opened up that article, and they were talking about this amazing research being done at the Rehab Institute of Chicago. When I was admitted to the University of Illinois, I scouted the campus and spoke to my future advisor, and he told me to come by his lab, which was at the rehab center, the Rehab Institute of Chicago. I stopped by there, and I saw this lady, the same person from the cover, walk down the hall and wave to my future advisor, and I was like, this is the lab I’ve got to be in.

Joel – 00:02:16: That’s nice.

Amit – 00:02:17: Yeah, so that’s how I entered biomedical engineering and got my PhD there. During my time there, I built out virtual reality and augmented reality training environments to rehabilitate neuromotor and stroke patients, trying to restore attention in traumatic brain injury survivors, as well as influence motor variability in stroke survivors so as to improve their motor control. My thesis was on identifying how people learn boundaries of instability, essentially. I used a reinforcement learning model to explain how people learn new environments, and it generalized to various different motor experiments. It explained the learning curves very well; it was a very generalized framework, and all you really needed was an understanding of the experimental paradigm so you could mock it up in the environment. From there, I went on to become a data scientist at Priscity Health, a biotech startup that provided support for cancer patients who were taking immunotherapy. As we know, when you boost the immune system, the immune system can start attacking your own tissue, and that causes severe toxicity. This company was a startup founded by Jim Allison, the 2018 Nobel Prize winner in medicine. I got to meet him several times, and it was a really interesting experience. They were partnered with MD Anderson Cancer Center. We built out a mobile chatbot and recommender engine to identify and collect patient symptoms and recommend how they should be triaged at the clinic. Once we deployed it in a clinic as a trial, I moved on to Abbott Laboratories, where I became a senior data scientist and helped automate report generation for laboratories, identifying the critical KPIs by which laboratories should be benchmarked against each other. Once we deployed that solution successfully, I moved on to simulating a laboratory, like a virtual laboratory, through a what-if model, building out a conception of what a virtual lab’s processes look like. Because laboratories operate on very tight margins, they need to plan better. They need to optimize turnaround times so that they can deliver as many effective tests to the clinic as possible. There are a lot of bottlenecks in the supply chain, and there’s also operational efficiency in terms of automation and the different workflow aspects. So I have expertise in that area now, where I actually built out this what-if model. It was a lot of forward engineering: we used data from past laboratories to inform it, but I generated a representative laboratory of a particular type and then simulated forward. I presented that to the analytics community at Abbott, and I got pretty good responses there. I also led a team on sales forecasting and promotion ranking for Abbott Nutrition in a hackathon. We actually won the hackathon, leveraging a promotion ranking algorithm that I had conceived. So that was a pretty big win. And I ended up getting this really heavy paperweight trophy, which was pretty nice, although it came about a year late; I received it just shortly after I left Abbott. So that was interesting. And, you know, I got promoted to data science manager over there.
I then led a team of data scientists toward building out a simulation for interfacing middleware with the analyzers for driver development, as well as examining the throughput of laboratory tests. That was my final project, more or less, and that’s when I decided to exit the company and build out my data and AI consultancy. My initial approach was to look at government contracts, where I thought data and AI were really hot, but I underestimated the difficulty of getting into the government. I partnered with a lot of firms to try to get in, but I think the biggest obstacle is that if you’re partnering with firms, they’re also looking for work, and you’re looking for work. And these are obviously going to be firms that are trying to gain a foothold.

Max – 00:06:35: That’s actually a fantastic background. And I think you and I could discuss at length how the whole government business works. When it comes to the government, privacy and security are the number one hurdle; you can’t get into the government because there are security concerns to the nth degree, right? And of course, there are other challenges too, which we talked about. But when it comes to healthcare and the privacy of information, help us understand: how did you manage some of those concerns? I mean, you’ve got a ton of experience at Abbott and the places you were at before. Help us understand how privacy and security in general are viewed by the data science community.

Amit – 00:07:14: Well, it’s a big hurdle, right? I mean, data is fragmented. It’s siloed in all these different areas. And how do you bridge that gap to actually build a more comprehensive model, better analytics, and better AI systems? That was one of my focal points at Abbott, trying to address that issue, and I was pitching various solutions toward it. At Abbott, you have data from diagnostics in one area and data from medical devices in another, so it was all over the place, and there were overlapping projects with similar initiatives. I think the enterprise is very fragmented as a whole; every department acts as its own startup, with no cohesion. So there’s a lot of inefficiency in that respect, where you have things you could tap into. And this is true even in the government, right? The US Army and the US Navy have all these different depots and different bases, and each one has a different array of intelligence around it, a lot of different initiatives, and you’re trying to integrate all that and make it more cohesive. I think now you’re starting to see the effort toward integrating all of that and trying to make it more seamless.

Max – 00:08:30: At least we hear about it, but I think you’re right. Joel, I don’t know if you’ve seen this in these large enterprises, but there’s this whole siloed structure, right? Everything is siloed.

Joel – 00:08:39: It is. And the surface is so large, and it’s getting larger, that technology is the only way to manage it, because there’s no person in the process who can track all of it. So I think about data discovery tools, data mapping tools, and all that. But when I heard you talking about the virtual lab, I was thinking about the intersection with AI. The whole concept of the digital twin is so popular right now, right?

Amit – 00:09:06: Yeah. That’s what my company is trying to do as well with the whole what-if modeling; I see the what-if model as a smaller portion of the digital twin. And I see a lot of confusion between digital twin and simulation. They’re very similar in terminology and concept, but there’s one fundamental difference: a digital twin is a really tight coupling between the physical counterpart and the virtual one. The physical counterpart is collecting data, and that data is streamed in real time to the digital twin. That gives you a real-time update of what the physical counterpart is doing, and then you can project forward and do all sorts of things in terms of operational efficiency, maintenance of devices, or streamlining workflows. And there are so many different levels of granularity, right? You have so many degrees of resolution, right?
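To make that distinction concrete, here is a minimal sketch in Python of the coupling Dr. Amit describes: telemetry streamed from the physical counterpart updates the twin’s state, and projections run forward from that live state rather than from assumed initial conditions. The class, fields, and drift model are illustrative, not from any production system.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Virtual counterpart that mirrors a physical asset's state in real time."""
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        # Tight coupling: every reading streamed from the physical
        # asset immediately updates the twin's mirrored state.
        self.state.update(telemetry)
        self.history.append((time.time(), dict(telemetry)))

    def project(self, horizon_steps: int) -> list:
        # A plain simulation starts from assumed conditions; a twin
        # projects forward from the live mirrored state.
        temp = self.state.get("temperature_c", 20.0)
        drift = self.state.get("temp_drift_per_step", 0.1)
        return [temp + drift * step for step in range(1, horizon_steps + 1)]

twin = DigitalTwin()
twin.ingest({"temperature_c": 37.2, "temp_drift_per_step": 0.05})  # streamed reading
print(twin.project(horizon_steps=5))  # what-if projection from the current live state
```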

Joel – 00:09:59: Absolutely. And I was going to say, in my work at Secure Robotics, what I’m seeing is digital twins popping up everywhere, in every industry, because they’re so effective. But here’s one of the things from our earlier discussion about data: as a CISO, I’ve got DLP tools that can find sensitive data all over the place, but I don’t have any tools that can cross into these digital twin worlds. So once that data leaves the physical world and enters the digital twin’s virtual world, I don’t have the ability to control it or inventory it. There are some parameters around it, but that’s an interesting space. Do you inventory the data as it enters the digital twin world, track it in there, and then make sure you bring it back out and purge it? How do you manage that entry and exit?

Amit – 00:10:46: So, I’ll be honest, I haven’t really done digital twins at that scope, but that is the ambition of my company, to start doing those kinds of projects. I think the experience I’ve had with simulations and data integration covers the fundamental components of building out a digital twin, where you build out the architecture and infrastructure around it, the IoT devices, and all those different elements, right? That makes it pretty easy to conceive that I could do digital twins as well. It’s just a matter of scale and resolution, and of how you manage that data. I guess what I would like to ask is, what do you mean in terms of the visibility of the data in the virtual counterpart?

Joel – 00:11:27: You know, it’s an interesting world, and I asked you a loaded question, because I don’t know of a system that really handles this well right now. More and more work is digital, and in the nonvirtual world, in the corporate enterprise, I have tools that can tell me where my data is at any given moment. I can control its flow; I can make sure it doesn’t exit certain places and so forth. Well, those tools don’t run inside these virtual environments. So basically, the data goes dark inside those environments, and depending on how complex those things are, they could be an exit point, because if a digital twin makes a call out to another environment through a tunnel I can’t see, it could be a significant security risk. And I’m seeing this all over the board without a good resolution yet. So I didn’t know if you were looking at that.

Amit – 00:12:12: That would be an interesting problem to work on and solve. But I think it’s just a matter of creating more tools around that visibility, and identifying and creating an audit trail in the virtual space of where that data is going, who’s viewing it, how it’s being manipulated, and so on.

Max – 00:12:27: Yeah. I think, with the whole digital twin, it’s kind of like what you mentioned, Dr. Amit: there’s a lot of conflation between simulation, virtualization, and digital twins, right? And then, all of a sudden, you throw physical reality right in the middle of it, and then we want to abstract away from that physical reality and make virtual things out of it, right? So I think we need to clearly understand the key differences between the three, because a lot of times we don’t. We had the same problem with the cloud. What is a cloud? Ten years ago, everybody was like, oh, that’s just a server. Now we have a three-layer model: infrastructure, platform, and software, right? So, when it comes to digital twins, we’re still exploring that. But to Joel’s point, the first obvious thing we do in security is know what you’re supposed to protect. How do you even inventory something that’s almost a ghost of a ghost, right? That’s a problem we don’t really know how to solve as a community when it comes to simulations and digital twins and things like that. And these are very pervasive concepts when it comes to enabling artificial intelligence and smart business systems in healthcare; in fact, you can’t really do without them. So, in your line of work, at Abbott and anywhere else, what seem to be the number-one hurdles? What’s stopping us from actually adopting artificial intelligence? Is it knowledge? What do you think? How can we accelerate the use and adoption of these things?

Amit – 00:13:57: So, honestly, I think the landscape has changed considerably in the past couple of years, and it’s no longer as big a barrier as it used to be. A lot of startups are exploding in AI now because we’ve pretty much commoditized the narrow tasks: vision, audio, and text recognition. All these models are now pretty common, and we know the patterns of what makes a good model. The transformer is the linchpin across all of that at the moment, and I think there will be more efficient architectures as we progress. But before that, it was really about the data: the diversity of data, and the fact that most of it was sparse and fragmented. You didn’t have enough data to really train a model. I did a webinar recently, well, about a year ago, on how you build AI PoCs or AI products when you have limited data. You can couple it with synthetic data, you can couple it with subject matter expertise and rules, and usually it’s going to be an interesting interplay between all these different components that creates a functional AI model. ChatGPT, for instance, leverages a lot of data from across the web, and coincidentally, they’re now getting in trouble because of GDPR. GDPR is a big hurdle for them, and it’s a big hurdle for just about any AI startup, really, because a lot of the data that’s in the public domain is not really usable for them. That’s really what OpenAI is facing right now, and there’s no clear traceability as to what data points were used to train the model, right?
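As a rough sketch of the limited-data pattern Dr. Amit mentions, here is a toy example, assuming scikit-learn and entirely made-up data: a tiny labeled set is padded with jittered synthetic copies, and a hand-written subject-matter rule backs up the model when its confidence is low.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Tiny "real" dataset: 20 samples, 2 features, binary label.
X_real = rng.normal(size=(20, 2))
y_real = (X_real[:, 0] + X_real[:, 1] > 0).astype(int)

# Synthetic augmentation: jitter the real samples, keep their labels.
X_syn = X_real + rng.normal(scale=0.1, size=X_real.shape)
X_train = np.vstack([X_real, X_syn])
y_train = np.concatenate([y_real, y_real])

model = LogisticRegression().fit(X_train, y_train)

def predict_with_rule(x, threshold=0.7):
    # Model first; a domain rule stands in for the expert when
    # the model's confidence falls below the threshold.
    proba = model.predict_proba([x])[0]
    if proba.max() >= threshold:
        return int(proba.argmax())
    return int(x[0] + x[1] > 0)  # subject-matter rule

print(predict_with_rule([0.2, 0.3]))
```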

Joel – 00:15:41: That opens up a whole bunch of other questions. One of the questions I had: I was just having a conversation with a large enterprise yesterday about the need to make sure data stays in region, certain privacy regulations, and so forth. But if you want to build a medical model, you have to get data from lots of different sources and subjects. So how do you balance that?

Amit – 00:16:04: So, I think that right now, there’s been a pattern of brute-force machine learning or deep learning training, where you’re just feeding in tons of data points, irrespective of the value of each individual data point. There’s a lot of information in the first hundred or thousand samples, but after that, each additional data point of the same form doesn’t add much. You also have to look at the distribution of the data and where you’re sampling from. If your data is sampled uniformly from across the distribution, that’s great. If you’re sampling from one concentrated region, you’re not going to get a more effective AI model, right? So that’s one caveat. And I think that a lot of data is not necessarily valuable; it just consumes space and money to store when it doesn’t have much value for an AI product, though it could be useful in other ways, in analytics and so on. But for an AI system to learn, especially with one-shot and few-shot learning, the amount of data that’s necessary has become less and less, especially with more complex models being democratized and publicly accessible over the internet.

Max – 00:17:14: So, Dr. Amit, this is a big problem in security, where data sovereignty is important: keep my data in India, keep my data in Switzerland, right? What you’re saying is that because of the advancements in model building itself, all of that data may not even be needed. You only need a limited set to get it to the point where it’s intelligent to some degree.

Amit – 00:17:36: Right. That’s one point. And the second point is that now we have federated architectures, where you can basically pull in the weights from all these different models that are deployed on the edge. You can aggregate all those weights, and now you’ve got a more functional model as a result of learning from these different systems locally. So the data is never seen by the organization that’s doing the AI training; the model is deployed on the edge, and you’re collecting the weights and so on.
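A minimal sketch of that federated pattern, assuming plain NumPy and a toy least-squares task: each simulated edge site takes a gradient step on data that never leaves it, and only the resulting weights are averaged centrally, in the style of federated averaging. The model, learning rate, and site count are illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One gradient-descent step on data that never leaves the edge site.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(edge_weights, sample_counts):
    # Aggregate only the weights, weighted by each site's data size.
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(edge_weights, sample_counts))

rng = np.random.default_rng(1)
global_w = np.zeros(3)
for _ in range(50):  # communication rounds
    updates, counts = [], []
    for _ in range(4):  # four edge sites with private local data
        X = rng.normal(size=(32, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=32)
        updates.append(local_update(global_w.copy(), X, y))
        counts.append(len(y))
    global_w = federated_average(updates, counts)

print(global_w)  # approaches [1.0, -2.0, 0.5] without pooling any raw data
```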

Joel – 00:18:07: You know, when I think back even nine to twelve months, some of the things you’re talking about now, the latest innovations, have really debunked some of the core principles of AI from even a year ago. That’s how fast it seems to be moving; it’s quite fascinating. But it makes me want to pull my hair out, if I had any, from a security practitioner’s perspective.

Amit – 00:18:28: I think we can all relate, right?

Max – 00:18:30: Yeah. Because I think we were just talking about this in one of our past episodes, and this is kind of a mystery, right, to a lot of cybersecurity professionals, because we’ve been hearing for years: hey, you need data, you need data, you need data. But with this whole concept of edge analytics, you deploy your model to the edge, and you’re just picking up sentiment and weights and whatever you need in order to actually build something of intelligence. I think that is the kind of thing we’re going to have to see within the market, especially when you look at things like OpenAI: they want a different version in China and a different version in the United States, right? But the overarching algorithm is to be managed by one team, right? That’s pretty fascinating. Very different from what we’ve learned so far, Joel, in terms of how to control the security and privacy of this thing.

Joel – 00:19:18: Oh yeah. It’s clear we have to give up some level of control. We just need to know how much to give up.

Amit – 00:19:23: Right. And it’s going to be on a case-by-case basis. It’s an evolving field, right? I think I have a fairly different perspective from a lot of people for whom big data is the most important thing. If you think about Shannon’s entropy and information theory, you realize that not all data points are equally important, and the volume of data isn’t as important as the diversity of data points. And especially with predictive models and forecasting: AI is not good at forecasting, let’s be clear. It’s not really good at forecasting, but it’s good at near-term predictions, which are very experiential and very predictable outputs. But if you’re looking further into the future, changing fundamental business assumptions and trying to project, you have to have a level of expertise and subtlety that isn’t easily commoditized. You have to have subject matter expertise. You have to have a level of understanding of what AI systems are capable of and what they are not. How do you model the causal effects of different outputs? Those are going to be huge aspects.

Max – 00:20:26: Yeah, I think that’s actually a big barrier to adoption. It’s not just knowing the vernacular or the language set beyond the marketing; it’s really knowing what the capability is and what it can do. And furthermore, as you mentioned, all of these models are democratized, right? So now it’s up to the practitioners and the experts to figure out how they fit in, right? And most of us specialize, right? We specialize in one area. I think there’s a term out there, the T-shaped engineer or something; I don’t know if you know it. But I’m seeing, and I don’t know if you’ve seen this, Dr. Amit or Joel, that talent is one of the biggest barriers to adopting some of the newer stuff that’s coming out, right?

Amit – 00:21:08: Yeah, I think with generative AI like ChatGPT, a lot of that talent aspect is somewhat democratized as well. You’re basically narrowing the skills gap, right? I don’t think education is as important as the fundamental aspects of how you reason and how you synthesize information. Those are going to be the critical aspects. Memory and recall are not going to be as prioritized in the future as reasoning ability and thinking skills, but even ChatGPT alleviates that burden to some extent. It really replicates certain patterns of thought and patterns of reasoning. A human can maybe be a little more creative on that landscape, but it becomes diminishing returns in the end.

Joel – 00:21:56: I guess one of the things when I hear you talk and think about it, Max: I hear what you’re saying on the talent side, that yes, the AI service layer is getting really good at making it easy to get in. But in some regards, it feels like we’re giving the keys to a Ferrari to a 16-year-old and saying, here, go take it for a spin around the block. All kinds of good things can come of it, and all kinds of bad things can come of it.

Amit – 00:22:25: Absolutely. Tesla’s FSD beta is kind of like that: you’re trusting an AI system to learn all the nuances of driving, collecting all these data points. And there’s not much validation behind that beyond the accident rate per million miles or whatever it is, compared to a car that’s being manually driven and so on. There are many different causal factors at play that you’re not modeling for, not accounting for, that could account for the difference as well. So there’s a nuance there that may not truly be appreciated or captured. It really depends on the level of risk that you’re taking on for certain activities versus others, right? Maybe AI systems that have imbibed expertise from radiologists, for instance, are great at classifying pneumonia, which is a very important task but perhaps medium or low risk, compared to performing in an operating room, where the risk is much higher and the danger more immediate. That is a place where you need a lot of validation in place.

Max – 00:23:39: You know, we just got done working with a firm, and I’ll have to introduce you to them, but this is the exact problem they’re facing: their FDA compliance is a huge nightmare because of model validation alone, right? The risk to human safety and patient safety is a much bigger deal than, let’s just say, cybersecurity in a traditional breach, right? There we’re losing money and losing reputation, but loss of human life is weighed at a totally different level, right? And when I look at the FDA’s guidance, Joel and Amit, it’s just so outdated, man; they’re barely keeping up. So when it comes to model validation and trusted AI, these are brand-new concepts, and I don’t think there’s anything out there. I think we’re going to see the emergence of a field. Before we can regulate it, and we have no clear answers here, how do we validate the models, right? So I don’t know if you guys see that as an upward trend, but that’s what I’m seeing as well: a big gap in the industry today.

Amit – 00:24:41: Absolutely. When you think about trustworthy AI and explainable AI, everyone thinks about it at the component level, the individual model level, but now we’re going to have much more intricate AI systems in place. And it’s going to be hard to extract the explainability and trustworthiness of an AI system at the aggregate level, the systems level. Honestly, I think you’re going to need to build these digital twins and have these AI systems play in those digital twins to understand and simulate what the consequences are. So I think digital twins are a really critical piece of infrastructure that will enable real AI adoption.

Joel – 00:25:22: And Max, I think the other dimension that we haven’t really seen play a huge role yet is adversarial attacks on AI. What you were talking about, Dr. Amit, is that QA level: making sure your algorithms and systems function without error. But when you enter a threat actor into the picture, and they start targeting or ransoming those systems, that’s a whole other vector that we haven’t fully figured out how to detect and deal with yet, either.

Amit – 00:25:50: Right, absolutely. You have to think a little bit outside the box and understand where the sources of sensitivity in these models come from; it’s really at the input level. When you’re looking at the inputs, say stop signs in a different color or a different geometric shape, what’s the level of impact on the AI model? It really depends on how you train it and what kind of architecture you’re using, because people adopt either a model-centric or a data-centric perspective, but it’s really the interaction of the two that forms the biases and the blind spots. A simple way to think about it is linear regression, right? You’re expecting a y equals mx plus b kind of relationship, and maybe, if you extrapolate further, it’s actually parabolic or something of that sort, and you just haven’t explored the surface enough. That’s what OpenAI really did: they scaled up enough parameters to overcome the perceived limitations of what AI was capable of, right? And that was really what contributed to this influx of generative AI.
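The linear-regression analogy in code form, a sketch assuming NumPy: a y = mx + b fit looks fine on the explored region, but the error grows quadratically under extrapolation because the true surface is parabolic, which is exactly the unexplored-surface blind spot described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Explored region: x in [0, 2], where y = x^2 looks nearly linear.
x_train = rng.uniform(0, 2, size=50)
y_train = x_train**2 + rng.normal(scale=0.05, size=50)

m, b = np.polyfit(x_train, y_train, deg=1)  # fit y ≈ mx + b

for x in [1.0, 5.0, 10.0]:
    linear, truth = m * x + b, x**2
    print(f"x={x:>4}: linear={linear:7.2f}  true={truth:7.2f}  error={truth - linear:7.2f}")
# In-region error is small; extrapolated error grows quadratically,
# a blind spot that comes from never exploring the rest of the surface.
```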

Joel – 00:26:54: But the problem with that is, and we’re finding this out all the time, what’s in those models themselves. We’re actually going back to the model and seeing what new things it knows that we didn’t know it could correlate, find, and use, that sort of thing. So when you get to that scale, being able to map all the paths is nearly impossible. That’s a pretty big issue.

Amit – 00:27:14: Yeah, absolutely. I think you’re talking about Bard and its ability to learn Bengali or something like that, which was surprising because there was supposedly no training data on Bengali or whatever language it learned. We have these emergent behaviors that can be very surprising just due to the complexity, which introduces a somewhat chaotic environment in that respect.

Max – 00:27:37: You know, Dr. Amit, here’s what comes to my mind, and this is an evolution we’ve seen on the cybersecurity front. Back in the day, 20 years ago, you could manually ping a server and see if it was online. Then all of that got automated through a toolset called Nmap. Fast forward ten years, and you’ve got advanced penetration testing programs, essentially a bunch of scripts and computer logic attempting to carry out some sort of attack. We still need humans in the loop for an attack to actually be effective; we know that. But to actually inspect an AI and validate it, first at the model level and then at the aggregate level, I’m wondering if humans are even sufficient. I think we’re going to need a different type of inspection, essentially an AI system that can counter another AI system.

Amit – 00:28:27: Yeah, it’s kind of like the Vision and Ultron paradigm in the Avengers, right? You’ve got Ultron, which is like the evil AI, and then Vision, which is like the good AI. And I see that same paradigm playing out. Honestly, there’s a lot of it in the Avengers. It’s very futuristic, but now it’s becoming more of an immediate-term reality. What was the AI assistant for Tony Stark? Well, not Jarvis this time; I think it was Friday, or whatever came afterward. He simulates quantum mechanics for time travel through a prompt, and that’s really what we have now. All of us are like Tony Stark now, leveraging AI systems to automate and be more productive. And it’s really the origin of the idea, and what our intention is, that’s going to produce those amazing results.

Joel – 00:29:21: That’s awesome. I always wanted to be Tony, so this is good.

Amit – 00:29:23: Yeah, we’re there. In some shape or form.

Max – 00:29:27: I think as soon as ChatGPT hit the market, school systems started having trouble. Do we make it legal? Do we not make it legal? And I think somebody else, another entrepreneur, launched a counter: hey, this will tell you if the homework was written by AI. So, if we look at it from the perspective of validation, it can pick up who did this and what its source is. I don’t know how legitimate that is, but that’s what I’m thinking in cyber: somebody has to counter it.

Amit – 00:29:59: It’s a predator-prey relationship, right? It’s going to evolve, and the dynamics are going to be just like a generative adversarial network, which has a generative model and a discriminative model: you get closer and closer to the point where the discriminative model cannot distinguish between synthetic and real anymore. And I think that’s the same kind of issue that’s going to play out in practice with generative text outputs and plagiarism detectors. I don’t think those detectors are going to last for very long, honestly. I think that you can mimic your own personal tone and voice through generative AI, and you’re not going to know the difference anymore. And so fraud is going to be a huge issue, and cybersecurity is going to be a huge issue now, because of these inputs.
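To see that predator-prey dynamic numerically, here is a toy sketch, not a real GAN: the "generator" improves by a scripted shortcut rather than by gradient feedback from the discriminator, but it shows the discriminator’s accuracy collapsing toward chance as the synthetic distribution converges on the real one. All distributions and rates are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
real = rng.normal(loc=5.0, scale=1.0, size=(2000, 1))  # the "real" data

gen_mean = 0.0  # stand-in generator: starts far from the real distribution
for rnd in range(6):
    fake = rng.normal(loc=gen_mean, scale=1.0, size=(2000, 1))
    X = np.vstack([real, fake])
    y = np.array([1] * len(real) + [0] * len(fake))
    disc = LogisticRegression().fit(X, y)  # discriminator: real vs. synthetic
    acc = disc.score(X, y)
    print(f"round {rnd}: generator mean={gen_mean:.2f}  discriminator accuracy={acc:.2f}")
    gen_mean += (5.0 - gen_mean) * 0.6  # scripted stand-in for adversarial training

# Accuracy falls toward 0.5, the point where synthetic and real are no
# longer distinguishable, the same fate predicted above for plagiarism detectors.
```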

Joel – 00:30:45: Max, I don’t know if you’re tracking this, but over the last couple of weeks there’s been an explosion in the number of other LLMs available. They’re now open source; they’re all over the place, so GPT doesn’t have a runaway lead. Now, most people might not have a supercomputer in their basement, but still, there are going to be all kinds of these everywhere. I imagine it’s going to get a lot more complicated.

Max – 00:31:06: Yeah, I’ve seen Facebook’s, and yeah, there are like five or six really big ones out there. ChatGPT, I think, made a brand, made a name, and brought the language and the awareness of what this thing is. Obviously, Dr. Amit, you’ve been doing this for quite some time, but now even the most common consumer knows what ChatGPT is, right? What generative AI is, right?

Amit – 00:31:28: Right. As far as OpenAI’s monopoly on ChatGPT goes, I think their real claim to fame will probably be the quality of the output and how much control they have over it. They have supporting models; it’s not just ChatGPT. There are supporting models to constrain the output, as well as to monitor its quality and so on. So I think they still have the leading high-quality generative AI models out there. But now there are going to be a ton of startups around that space. Honestly, that’s one of the reasons I don’t go into generative AI modeling, as far as large language modeling goes; I’m more interested in the more sophisticated AI systems that emerge from it. It’s now just a Lego block to me. A super-powered Lego block. That’s how I look at it.

Max – 00:32:13: Yeah, I think you’ve got to look at where the white space is, right? And right now, I do see on LinkedIn and other places something like a thousand new startups every day.

Amit – 00:32:27: A lot of which are not very valuable, exactly because they’re leveraging the same kind of tech stack, right? And of course, they’re throwing caution to the wind with privacy issues and so on in a lot of ways. I think even in healthcare, a lot of people are throwing caution to the wind and running with ChatGPT without really considering all the different privacy risks. And this is an evolving thing, where OpenAI is not even clear exactly how it’s managing all the privacy. It’s just coming into its maturity at the moment.

Max – 00:32:58: Yeah, they made the product so sticky. And for a lot of corporate enterprises, you know, as security professionals, we can write down an AI policy, but who actually follows it? That’s yet to be determined, right? Somebody figures out that magical prompt to get what they need out of ChatGPT, right? There have already been sensitive leaks and things like that. So that’s really fascinating, man. I think we’re just in a very different world right now.

Amit – 00:33:25: Not just that, right? Now you have malicious actors with very low skills able to produce zero-day hacks through ChatGPT. And they don’t do it at the program level; they’re very sneaky about it. They’ll say, okay, write this function, write this function, and suddenly they’ve composed a program that can exploit another system, just like that. I think there are going to need to be more sophisticated AI models that actually simulate or model human behavior and the intent behind it, in order to really shield people from these kinds of exploits.

Joel – 00:34:00: That makes a lot of sense. And Max, when you’re talking about an AI policy, that’s the number one question I’ve been asked over the last couple of weeks: do we need an AI policy now, because of the Samsung issue, or issues? But one of the things I’m saying to corporate leaders is that a restrictive policy that says you won’t do it may be a good way to lose a competitive edge, because this is such a revolutionary space. The key is to figure out how to harness its power, because if you turn your back on all of it, your user population is not going to follow you, and you may hurt your company. So try to figure out how to walk that line and avoid those pitfalls. That’s the big challenge we have right now.

Max – 00:34:36: This is just my hypothesis: I think something catastrophic is going to happen, and then there’s going to be a tremendous amount of pressure on OpenAI to do something. I mean, we see the CEO of OpenAI visiting Washington, DC. There’s a reason for that, right? I think they’re trying to figure out how to regulate it, and it’s a question for both sides. For the private sector, it’s: forget regulation, we need to build safe systems; how do we protect this thing from causing damage? And on the government side, it’s: how do I even regulate this thing? Because the market is not going to stop adopting it. I know we saw Elon Musk and the open letter, but people are just going to gravitate toward it; it’s so easy to use to make your life easier, your job easier. They’ve made the product so sticky that people are going to go around any restriction and use it.

Amit – 00:35:32: I think there was somebody who said that ChatGPT isn’t going to be widely adopted for another few years. I think he was very misled. I think it’s already democratized. Everyone’s basically using it at this point. It’s permeated the mainstream. So, yeah.

Max – 00:35:47: My mom doesn’t know about it yet, but yeah, it’s...

Amit – 00:35:50: Yeah, yeah, of course. Yeah.

Joel – 00:35:53: Yes. Oh, my word.

Max – 00:35:55: Well, awesome, man. We’re almost coming up on our time here, Dr. Amit, and we would definitely love to have you on the show again. But one of the parting questions I wanted to ask is: as you’re going out there in the industry, have you seen an uptick in security concerns, and in the roadblocks that security might put up? Because that’s what we have been hearing about. In security, there are a lot of naysayers, but I’m actually seeing the opposite effect. Almost every single security person right now is saying, go adopt AI, because I think we learned from the cloud, where everybody was saying no, and now everybody is there. But as you do your work, how have you seen security play a role in this? Are there barriers? Are security people embracing the idea, from your perspective, or is it more the AI professionals who are saying, slow down, because this thing is dangerous?

Amit – 00:36:47: So I think it’s really the AI professionals who are saying slow down, right? I think they’re more aware of all the risks and the lack of controls around them. Cybersecurity professionals have always been aware of the risks of data leaks and things like that. I honestly haven’t interfaced with a lot of cybersecurity professionals, but I would say that among AI professionals, there are those who are on the risk side and those who love it, right? I fall somewhere in the middle, I think. There’s a lot of opportunity there, but there needs to be some caution and some due diligence on the security and privacy side of things, especially in educating your employees on what is permissible in terms of prompt generation. How do you do it safely? How do you de-identify the data? If you want to put in code, what is acceptable code to put in that’s not considered IP, considering the Samsung leak and so on? That’s a big factor. I think there are going to be AI professionals who serve as educators of what is right and wrong, especially.

Max – 00:37:53: That’s why we have you on, Dr. Amit. I wholeheartedly agree with you, because we’re getting into an area where there’s a fusion of AI technologists and cybersecurity professionals, and we haven’t even touched on or talked to the lawyers and legal professionals about what they think, or how they’re going to write a law around it. But with that, man, I’d like to thank you for coming on our show. I think this was highly educational for me, at least, and I know for Joel and some of our audience as well. So thank you so much.

Amit – 00:38:23: Thank you. It was great to be here, and a pleasure speaking with both of you.

Max – 00:38:28: Emerging Cyber Risk is brought to you by Ignyte and Secure Robotics. To find out more about Ignyte and Secure Robotics, visit ignyteplatform.com or securerobotics.ai.

Joel – 00:38:39: Make sure to search for Emerging Cyber Risk in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and make sure to click Subscribe so you don’t miss any future episodes. On behalf of the team here at Ignyte and Secure Robotics, thanks for listening.