The Audit - Cybersecurity Podcast

Ghost in the Machine: AI Identities & Spiritual Red Teaming

• IT Audit Labs • Season 1 • Episode 86

Your organization may have hundreds of AI agents running right now that your security team doesn't know exist. Every single one is an identity. Every identity is an attack surface. 

In this episode of The Audit, co-hosts Joshua Schmidt, Eric Brown, and Nick Mellem sit down with Madhav Nakar, security researcher on the Phantom Labs team at BeyondTrust, to break down one of the most underexplored threats in enterprise security today: untracked AI agents creating exploitable "ghost identities." Madhav just returned from RSA, where he noticed every booth had an AI angle and a bubble forming, and he's here to cut through the noise with hard-hitting research and practical guidance. 

πŸ” Key Topics Covered: 

  • How low-code platforms let non-technical users spawn unvetted AI agents, and why that's a goldmine for attackers 
  • Ghost identities: what happens when AI agents run on untracked, over-privileged system identities 
  • The AWS sandbox DNS exfiltration proof-of-concept from BSides (BeyondTrust research) 
  • Why siloed AWS, Azure, and Okta teams create hidden privilege escalation paths 
  • "AI vs. AI" β€” the emerging defender model where autonomous systems monitor each other 
  • Browser extension cross-contamination and prompt injection risk for enterprise Claude deployments 
  • The three conditions that make any AI agent dangerous: private data access + untrusted instructions + tool execution 
  • Madhav's framework: inventory → least privilege → visibility, the basics that still matter most 

Bonus: Madhav shares how "spiritually red-teaming yourself" (facing fear, breaking false narratives, and building trust) maps directly to how security professionals should approach zero trust and identity management. Plus: Joshua, Eric, and Nick on conquering stage fright and what it has to do with cybersecurity culture. 

Don't wait for a ghost identity to become a ghost incident. Subscribe for weekly cybersecurity insights from practitioners, researchers, and the people defending the frontlines. 

#GhostIdentities, #AIAgentSecurity, #NonHumanIdentity, #ZeroTrust, #TheAuditPodcast 

Why AI Adoption Is Inevitable

Madhav Nakar

So it's an interesting problem because the organizations that don't adopt AI are going to be left behind. And the ones that do are going to move forward. So every organization has an incentive, as they should, to adopt AI to its maximum power.

Joshua Schmidt

You've joined The Audit. Thanks for listening. My name is Joshua Schmidt, your co-host and producer, and we're in the studio today in St. Paul. We're joined by Nick Mellem and Eric Brown, and today our guest is Madhav Nakar. He's from BeyondTrust, where he's a security researcher, and he's a content creator currently working on a documentary. We have a fun conversation today about AI and cybersecurity, combining technical and analytical expertise. So without further ado, Madhav, introduce yourself.

Madhav Nakar

Hey everyone, thank you for having me. I'm Madhav. I work as a security researcher on the Phantom Labs team at BeyondTrust. In my free time, I love to talk about spirituality and also about cybersecurity. It's one of those intersections where trust is at the core of both cyber and spirituality.

Claude Code Leak And Hidden Features

Joshua Schmidt

I'd love to get your thoughts and dig a little deeper on that. But we were chatting before we hit record about the recent Claude Code leak. Have you been working on that, and what have you found?

Madhav Nakar

Yeah, it's interesting. I haven't dug in too much; I only recently found out about it. But essentially, someone looked at the entire source code and found features that Anthropic hadn't announced yet. So it was pretty cool to see some of those. I believe one of them was called Kairos; I've forgotten the name of the other one, but they're coming out with some pretty cool features.

Nick Mellem

We got the inside track, it's all on GitHub. Yeah, and they converted it to Python.

Madhav Nakar

Yeah, that's right. I believe his name is Singrid Jin, one of the biggest users of Claude Code. He converted, I believe, the entire source code to Python and just published it.

Joshua Schmidt

Good for him. Well, today we wanted to jump into untracked AI agents creating exploitable ghost identities. That's something that's been top of mind for Madhav, and I know you've done some research on it. What have you been working on?

Madhav Nakar

I came back from RSA last week. That felt like kind of a weird vacation. Interesting.

Nick Mellem

Anything stick out at RSA that you enjoyed?

Madhav Nakar

Oh man, honestly, literally every single booth was about AI, and I feel like there's a bit of a bubble with AI.

Joshua Schmidt

Tell me more about that. Is it an investment bubble, or a return-on-investment problem? What's your perspective on that?

Madhav Nakar

Well, I think there are some legitimate companies that are approaching it the right way, but it's easy now to spawn a SaaS app just because you can vibe code it. And with how easy coding has become, I think a lot of the companies that don't actually provide value are going to be exposed in the next couple of months or a year. It's kind of like the dot-com bubble: everyone's over-invested in it, and I see that parallel.

Joshua Schmidt

Well, I know in this episode we wanted to explore some of these overlooked threats that we're facing today, organizationally. You've been working on untracked AI agents creating exploitable ghost identities. Can you tell us more about your research there and what you found?

Madhav Nakar

Yeah, so there's a concept called low code, which means any non-technical user can create an application. When you apply that to AI agents, any non-technical person, for example someone in the HR department, can spawn their own AI agent system that can interact with different databases, different records, et cetera. The problem with these low-code solutions is that they don't go through the normal software development cycle, which means there's no security review. There's no one checking whether the programs you're creating actually hold up to the security standards you want. And that's a goldmine for attackers. Organizations that don't track their AI agents are prone to attack for two reasons. Number one, you're not tracking them. And number two, you might be giving these AI agents too much power. And when that happens, we've all seen numerous cases in the past couple of months of what happens when you do that.

Joshua Schmidt

Have you run across any of this happening on the security posture, cyber defense side of things, Eric?

Eric Brown

I think we're seeing a lot on the phishing side, and a lot related to attacks that are more autonomous in nature, where when a vulnerability is found, it seems like there are numerous attacks against that vulnerability right away. So some form of code is being generated and then used to build those exploits at a velocity maybe not seen before. And you're pretty close to this stuff too. Are you seeing the same thing?

Madhav Nakar

Yeah, yeah. And it's interesting. Sometimes CVEs don't even get declared, but people notice that a piece of software has been patched, they reverse the patch, and they realize the reason it was patched was that there was a vulnerability. So actually, I think attackers are finding vulnerabilities faster than the CVEs are being announced.

Eric Brown

So it's almost like the CVE is a beacon of "now go attack this," because only a very small percentage of people are actually going to patch as soon as the patch is made available.

Madhav Nakar

Yeah, it's a common concept in the offensive industry. I'm sure you guys know about the script kiddie concept, but now imagine you give a script kiddie Claude Code. That's almost equivalent to an exploit developer.

ServiceNow Example Of Tool Abuse

Joshua Schmidt

So what does this actually look like in the environment when it shows up? How do you see it showing up, and what does that look like?

Madhav Nakar

Yeah, so let's pick a concrete example. Take ServiceNow. In ServiceNow you can create AI agents, and these AI agents can have tools that are all over the board. One example is a tool that can insert records into a table. If you have an AI agent that you can prompt maliciously to insert a record into, let's say, the roles table, you can suddenly tell it to give you admin permissions, just because of how powerful that tool is. So if you don't have a proper inventory, not just of the AI agents but also of the tools they have access to, it's a recipe for disaster, because the tools are where AI agents get their power from.
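
To make that tool-abuse risk concrete, here's a minimal Python sketch. It is not real ServiceNow agent code; the function names are hypothetical, and only the role-table names mirror ServiceNow's actual schema. It contrasts an over-broad record-insert tool with a least-privilege version.

```python
# Hypothetical sketch of the tool-abuse risk described above. Function names
# are illustrative, not a real ServiceNow API; the protected table names
# mirror ServiceNow's role-assignment tables.

ALLOWED_TABLES = {"incident", "task"}                      # tables this agent legitimately needs
PROTECTED_TABLES = {"sys_user_has_role", "sys_user_role"}  # role assignments

def insert_record_unsafe(table: str, fields: dict) -> None:
    """Over-broad tool: the agent can write to ANY table, including roles."""
    print(f"INSERT INTO {table}: {fields}")  # stand-in for the platform call

def insert_record_scoped(table: str, fields: dict) -> None:
    """Least-privilege tool: writes are confined to an explicit allowlist."""
    if table in PROTECTED_TABLES or table not in ALLOWED_TABLES:
        raise PermissionError(f"agent tool may not write to '{table}'")
    print(f"INSERT INTO {table}: {fields}")

# A prompt-injected agent asking for admin:
insert_record_unsafe("sys_user_has_role", {"user": "attacker", "role": "admin"})  # succeeds
insert_record_scoped("incident", {"short_description": "printer down"})           # allowed
try:
    insert_record_scoped("sys_user_has_role", {"user": "attacker", "role": "admin"})
except PermissionError as e:
    print("blocked:", e)
```

The point of the sketch is the inventory question Madhav raises: the danger lives in the tool grant, not the model, so the allowlist is what a reviewer needs to see.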

Joshua Schmidt

You mentioned that companies can discover they have hundreds or even thousands of untracked agents. Walk us through what happens when attackers discover these ghost identities before the security team does.

AI Identities And Privilege Paths

Madhav Nakar

Yeah, so it's an interesting issue. When we as an industry went from on-prem to cloud, we had an influx of new identities being created, right? There was an identity for Azure, for AWS, et cetera. It's the same thing with AI, but now super-boosted. These AI systems run on identities; they get their privileges from the underlying identity they're executing with. So if you're not tracking those identities, you don't know how much privilege they have. If an attacker gets access to, let's say, an AI agent, they perhaps don't even care about the AI part. They probably just care about the identity part. Maybe that identity lets them pivot from Azure to AWS, and that becomes a path to a different sort of privilege. So these AI systems aren't just exposing AI insecurities; that's part of it, but the identities running the AI systems are also now exposed.

Eric Brown

For the really critical roles, you have another level of approval. You have your regular MFA to get in, and then maybe a secondary approver, maybe another admin in the organization who's approving access in real time to whatever it is you need, say a global administrator account. You have that second tier of person. From your work at BeyondTrust, are you seeing what that next level of password management is? Are you looking at doing other things around securing passwords, or is anything new coming from that space?

Secrets Auditing And Team Silos

Madhav Nakar

Yeah, more secrets auditing. That's where we're seeing the industry move. So, for example, agents might have access to secrets, and you want to know when a secret is being used, whether it's normal for an agent to use it, and whether it's normal for it to be used at a certain time. Secrets management is becoming another powerful way to catch these attackers, too.

Eric Brown

Yeah, that's a good idea. It's that authentication piece, right? You could have a non-human identity that is authenticated, but then maybe you restrict the time frame in which it's authenticated. Maybe it's only between certain hours, or maybe you look at it in conjunction with other roles doing something where that non-human identity might need the access, so it's not just open 24/7. That's pretty cool.
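
A minimal sketch of what that kind of secrets-usage auditing could look like, assuming a hypothetical policy store and log fields (the secret IDs, principals, and hours here are invented for illustration):

```python
# Sketch: flag a secret used outside its expected hours or by an unexpected
# principal. The policy store and field names are hypothetical.
from datetime import datetime

EXPECTED = {
    "svc-hr-agent/db-password": {"principals": {"svc-hr-agent"}, "hours": range(8, 18)},
}

def audit_secret_use(secret_id: str, principal: str, used_at: datetime) -> list[str]:
    policy = EXPECTED.get(secret_id)
    if policy is None:
        return [f"{secret_id}: no usage policy on file (untracked secret)"]
    findings = []
    if principal not in policy["principals"]:
        findings.append(f"{secret_id}: unexpected principal '{principal}'")
    if used_at.hour not in policy["hours"]:
        findings.append(f"{secret_id}: used at {used_at:%H:%M}, outside expected hours")
    return findings

# A 3:12 AM use by an unknown session should raise both flags:
print(audit_secret_use("svc-hr-agent/db-password", "attacker-session", datetime(2025, 6, 1, 3, 12)))
```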

Madhav Nakar

One thing you notice a lot in huge organizations is that you might have an AWS team and an Azure team and an Okta team. They're siloed; they're not communicating with each other. So maybe an Okta secret gives you super-administrator privilege in AWS, but because those two teams are siloed, you don't know that path to privilege. So it's also important to track how the cross-tenant access works.
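
One way to surface those silo-crossing paths is to model grants as a directed graph and search it. A minimal sketch, with hypothetical identity names and edges:

```python
# Sketch of modeling cross-tenant access as a directed graph and searching it
# for silo-crossing privilege paths. The identities and grants are invented.
from collections import deque

EDGES = {
    "okta:hr-agent":       ["okta:api-token"],
    "okta:api-token":      ["aws:role/automation"],  # the edge neither team sees
    "aws:role/automation": ["aws:role/admin"],       # escalation inside AWS
}

def privilege_paths(start: str, target: str) -> list[list[str]]:
    """Breadth-first search for all simple paths from start to target."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in EDGES.get(path[-1], []):
            if nxt in path:
                continue  # skip cycles
            if nxt == target:
                paths.append(path + [nxt])
            else:
                queue.append(path + [nxt])
    return paths

print(privilege_paths("okta:hr-agent", "aws:role/admin"))
# [['okta:hr-agent', 'okta:api-token', 'aws:role/automation', 'aws:role/admin']]
```

Neither the Okta team nor the AWS team owns the middle edge in this toy graph, which is exactly why a cross-tenant view has to exist somewhere.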

Nick Mellem

I was just going to get into that. Especially in the organizations we're working with, or I am specifically, these teams aren't talking with each other, so that communication is out. So the question is, who owns this? You brought up auditing before; the auditors could own this, or provide that path to compliance, or what have you. But I guess the question is, who do you think should own the best practices for getting these teams talking?

Madhav Nakar

That's an interesting question. Who should own this? I think generally security teams should be looking at the entire organization. I understand that for developers it's better to be siloed in their own niche, but then the security team has to have the holistic picture. Whoever reports to the CISO needs to have a bird's-eye view of the entire organization. So I think it largely does fall on the security team. The point of developing these apps is usability, and you guys know the common friction: usability and security are kind of opposite ends of the spectrum. That tension is always going to be there. So I think it falls on the security team.

Sandbox Claims That Do Not Hold

Nick Mellem

Yeah, yeah, I would agree. We have to audit these things more than yearly, and the communication between the teams is critical. You could use something as simple as a OneNote page to keep these things straight, and then work with your partners across the aisle to make sure they're in check. But it's interesting; I wanted to chat with you and see who you thought should be owning these things, because things are changing so quickly, just like the new advancements we were talking about before. We've got to look at these things very often.

Eric Brown

And Madhav, you spoke kind of tangentially about this at BSides a little while ago, right? Was your talk on owning and defending AI agent code interpreters?

Madhav Nakar

That was my co-worker's, yeah.

Eric Brown

Yeah. So how would you bring that into an enterprise organization?

Browser Extensions And Web Access Risk

Madhav Nakar

Yeah. So first, to give some context about that talk: AWS has something called sandbox mode, which AI agents' code interpreters run in, and AWS documentation said absolutely no network connections leave that sandbox. What my co-worker Kinnaird showed was that DNS was leaving that sandbox, and through that he was able to craft a malicious CSV telling the AI agent to make DNS requests through a protocol he designed. Essentially, he built a whole protocol for a reverse shell into that sandbox. It was really cool to see a full reversal through the DNS connection alone. But I think the bigger point is that just because something sounds secure, or secure by design, doesn't mean it is. "Sandbox" sounds like everything's secure, but it's not. It's the same thing with Anthropic's Claude Code sandbox. I recently read an article that said they have a sandbox, but if the agent can't get its work done, it has a way to get out of the sandbox automatically, without human approval. That's not really a sandbox. So what I would say to organizations is: if it says secure by design, or if it sounds secure, you should still go through a security process where you see what's actually happening behind the scenes. That would be my first recommendation based on that talk.
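
On the defender side, the kind of DNS exfiltration described here tends to leave a signature: long, random-looking subdomain labels carrying encoded data. A minimal detection heuristic in Python, with illustrative thresholds rather than values from the BeyondTrust research:

```python
# Sketch: flag DNS queries whose subdomain labels look like encoded data
# (long labels, high entropy). Thresholds are illustrative, not tuned.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_exfil(query: str, max_label_len: int = 40, entropy_cutoff: float = 3.5) -> bool:
    """Heuristic: data smuggled over DNS produces long, random-looking labels."""
    labels = query.rstrip(".").split(".")
    subdomain = labels[:-2]  # ignore the registered domain and TLD
    return any(
        len(label) > max_label_len or shannon_entropy(label) > entropy_cutoff
        for label in subdomain
    )

print(looks_like_exfil("4a6f686e446f65313233343536373839616263646566.evil.example.com"))  # True
print(looks_like_exfil("www.example.com"))                                               # False
```

A heuristic like this is noisy on its own (CDNs also produce long labels), but paired with "this sandbox should make no lookups at all," even a single flagged query is a strong signal.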

Joshua Schmidt

What does this look like, and what are the potential code injection or attack surface issues that could come up if I'm doing something like that?

Eric Brown

The browser extensions issue is certainly an interesting problem that many organizations are facing: whether to leave it open for users to install whatever extensions they want, or to really lock the browser down. And there are a couple of companies now, Island Browser is one, and Palo Alto has Prisma Access Browser, that treat the browser as a hardened application and don't allow extensions unless they're explicitly allowed. It is an exposure for organizations, like storing passwords in browsers, which you wouldn't want to do, as well as what you allow the user base to install. I've seen different extensions like Cut the Rope and Candy Crush and all these other things people install that are not business related. So, Josh, I would take a less-is-more approach to how browser extensions are used at the organization. And to your question specifically around Claude and whether we should be doing that: if you want Claude Cowork to have that web access, that's the only way to do it right now. There might be an API way, but with that one in particular, I think it comes down to understanding the art of the possible, or what could be exposed. If it has access to everything coming through that web page, and you have other extensions that maybe hold your passwords or other sensitive information, it's going to have that access.

Joshua Schmidt

So if I heard you correctly, browser extensions can access other browser extensions?

Eric Brown

I believe they can. Yeah.

Joshua Schmidt

Okay. Wow. So that could open up a whole can of worms.

Nick Mellem

Whole can of worms.

Simple Rules For Safer AI Agents

Joshua Schmidt

But have you done any research on that specifically? Have you seen any cross-pollination or cross-contamination from browser extension to browser extension, or prompt injection attacks of that nature?

Nick Mellem

I think one of the things, in a conversation I had at an organization, is that we're always wrestling with where we're bringing AI. So I was thinking: where are organizations overconfident? Are people not thinking about this enough right now? If you were talking to an organization right now, what would we warn them about? We're talking about all these very technical things, and people listening to this are obviously into technology enough to understand what we're talking about. But if we had to dumb this down, how do we start the conversation with organizations that are trying to look out for these things, that are maybe hearing this episode right now and want to know more? How do we bring this down to earth for them?

Madhav Nakar

So it's an interesting problem, because the organizations that don't adopt AI are going to be left behind, and the ones that do are going to move forward. So every organization has an incentive, as they should, to adopt AI to its maximum power, for productivity, for whatever reason. But the other problem that comes in is automation sprawl. You see AI being implemented everywhere, so AI identities are going up, but the defenders who are supposed to defend those identities scale linearly, so you see an increasing gap between the automation sprawl and the human defenders trying to keep up. I think it comes down to a couple of principles that organizations need to be aware of. There's a person named Simon Willison who said there are three things that make an AI agent insecure. Number one, it has access to private data. Number two, it gets its instructions from untrusted sources. And number three, it can call tools. If an AI agent can do all three together, it can wreak havoc, by exposing secrets, doing something malicious, et cetera. There was an instance where an AI agent tore down a whole environment because it couldn't find the solution; it decided it needed to start from scratch, so it deleted the entire code base. That's an example of a tool-equipped agent going rogue. So point one would be to inventory your AI agents. Number two would be to be conscious of the tools you're giving them access to. If you want a complex system running with AI, you might want to create specialized AI agents that have access only to the tools they need, so that you segregate these different AI agents. And, I don't think anybody has a real solution right now; I think as an industry we're trying to come up with one. But perhaps the future is AI against AI: you have defender AIs versus the AI systems that are there for productivity, and the defender AIs watch the AI systems that are actually doing tasks. That might be the way the industry goes.
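
A minimal sketch of turning those three conditions into an automated review gate. The AgentConfig shape and field names are hypothetical, not from any specific platform; a real inventory would pull these flags from agent metadata:

```python
# Sketch: score an agent against the three conditions named above
# (private data + untrusted instructions + tool execution).
from dataclasses import dataclass

@dataclass
class AgentConfig:
    name: str
    reads_private_data: bool
    takes_untrusted_input: bool  # e.g., parses inbound email, web pages, CSVs
    can_call_tools: bool

def risk_review(agent: AgentConfig) -> str:
    flags = [agent.reads_private_data, agent.takes_untrusted_input, agent.can_call_tools]
    if all(flags):
        return f"{agent.name}: BLOCK - all three conditions present; split or sandbox it"
    if sum(flags) == 2:
        return f"{agent.name}: REVIEW - one condition away from the dangerous combination"
    return f"{agent.name}: OK under this heuristic"

print(risk_review(AgentConfig("hr-helpdesk-agent", True, True, True)))
print(risk_review(AgentConfig("report-summarizer", True, False, False)))
```

The segregation Madhav suggests maps directly onto this check: splitting one do-everything agent into specialized ones is what moves each agent from three flags down to one or two.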

Nick Mellem

Yeah, I saw something from Simon, I think it was a quote: in an AI-driven world, does identity become more important than network security?

Eric Brown

I've got a twofold response to that. The first part goes back to what we were talking about earlier, where you have those non-human identities that need some sort of authentication and authorization, and maybe time-boxing that, or at least having awareness of when those credentials are being leveraged. But the second part is that it's bringing on a whole new insider threat. I was going through this with one of the organizations I work with, where the developers are really operating outside the administrative controls in place, trying to install things like OpenClaw, and a couple of months before that, when DeepSeek first came out, trying to access DeepSeek. Fortunately, we had protections in place to alert on and block those things. I understand they're really looking for efficiency gains, how to work better and faster, what have you, but they're doing it in a bubble, without really understanding the ramifications of why you wouldn't want to install OpenClaw so that it has access to your email, or OneDrive, SharePoint, Google Docs, whatever it is. Society right now is just looking for that edge, that AI edge, right? You can do more, faster, better with AI, and people are believing that hype and bringing it into the workforce, creating this whole other insider threat. So the security team, which is focused on external threats, now has to be even more diligent about watching insider threats as well. And the developers aren't doing it on purpose; they're really just trying to find that edge, but it creates this other use case.

Nick Mellem

Before this became a problem, the worst thing we had was people using Gmail.

Joshua Schmidt

So you really see this opening up a whole new can of worms, huh? Are there any other attack surfaces people should be taking into account? And what are some practical steps right now if you're the leader of an organization and all of a sudden you realize you don't have an AI attack surface registry, or you haven't cataloged all of your tools?

Eric Brown

I'd be interested to hear what Madhav thinks on this too, because he's at the cutting edge. Most organizations don't have their CIS Controls 1 and 2 under control, right? They don't know all of the systems that are out there. They don't know all of the software; they don't have it inventoried or know where it's deployed. Now we're bringing in agents that can essentially write their own software, and they're being deployed across the organization in sandboxes, in little pockets, or maybe even more widely. It's going to take a whole new control around how we ring-fence this and get a handle on it. But right now I don't know that there is any good answer.

Madhav Nakar

Yeah, I agree. And when there's no good answer, you go back to your basics: AI is software that runs, and you want to give it the least privilege possible. It always starts with the basics when you don't have anything under control: inventory, least privilege, and visibility, honestly. And you mentioned one other interesting attack vector that I want to fold in, which is supply chain. You also mentioned OpenClaw. The reason OpenClaw got so widely deployed is its usability through skills. Skills are these little pieces of instructions that people can deploy to make their AI more powerful, but nobody's auditing these skills. So it's a whole new form of supply chain. In any case, at the very basics, AI is another piece of software that can take action. So least privilege, as cliche as it sounds, is where cybersecurity is rooted.
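
A minimal sketch of those inventory, least privilege, and visibility basics as a first-pass audit script. The inventory schema here is hypothetical; the checks are the point:

```python
# Sketch: first-pass audit over an agent inventory for the three basics.
# The AgentRecord schema is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    owner: str | None                      # accountable human or team
    tools: set[str] = field(default_factory=set)
    logging_enabled: bool = False

def audit(agents: list[AgentRecord]) -> list[str]:
    findings = []
    for a in agents:
        if a.owner is None:
            findings.append(f"{a.name}: no owner on record (ghost identity candidate)")
        if "*" in a.tools:
            findings.append(f"{a.name}: wildcard tool grant; scope to what it needs")
        if not a.logging_enabled:
            findings.append(f"{a.name}: no action logging; no visibility")
    return findings

inventory = [
    AgentRecord("hr-onboarding-bot", owner=None, tools={"*"}),
    AgentRecord("ticket-triage", owner="it-ops", tools={"read_ticket"}, logging_enabled=True),
]
print("\n".join(audit(inventory)))
```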

Nick Mellem

I was going to bring up having some sort of AI board, right? We've got a board for everything: a board of users, power users, et cetera. Get your technologists in a room, and that's where you can go through those CIS controls, go through the inventory, and then divvy out the tasks. So you have some mechanism, again, for communication, and that gives you an auditing arm, with board members who can make these decisions quickly and divvy out tasks, such as reviewing applications or what have you, and it can all filter up to the same place.

Eric Brown

Could we pivot, Josh? I wanted to pick Madhav's brain on the dancing in public, that documentary piece, and bringing spirituality into cybersecurity.

Joshua Schmidt

Yeah, you mentioned trust, Madhav, and you mentioned that it's ubiquitous across cybersecurity as well as religion. Maybe you could dive into that as well.

Madhav Nakar

Yeah, that was an interesting pivot. So this whole journey started because I was a little bit shy on camera. So I decided I'm going to do something: I'm going to make a video of me doing everything that scares me, with the central theme of me dancing, because dancing was always a fear. The fear is self-expression, so I'm going to express myself to the maximum. I filmed in public, danced, took voice coaching classes, and through that I found I really enjoy content. But the trust piece comes in like this: we all have a set of narratives that we believe in. We have our own worldview, we have our labels, whether you're man, woman, Democrat, Republican, Black, white. Those are labels we put on ourselves, and you put your trust in that narrative, in that worldview. Which is good; that allows us as humans to navigate the world. But if that narrative is wrong and it's not serving you, that trust is actually not good. For example, for me it was: it's not good for me to express myself. I was putting my trust in a narrative that wasn't actually true. So I had to break that whole narrative by doing something that scared me. And cybersecurity is kind of the same, even if that's a little meta. At a certain point, you have to place your trust in certain things, whether it's a user identity, your EDR, your PAM solution, whatever it is; you're placing your trust in something. And as cybersecurity folks, our whole job is to see whether that trust is warranted or not deserved. And if it's not deserved, how can it be exploited? In that frame of mind, I was spiritually red-teaming myself.

Eric Brown

Nice. I want to unpack that a little bit. With your dancing in public, how did you get over that fear? Was it first just recognizing that you had the fear?

Madhav Nakar

Yeah. The thing with fear is that it's a very felt sensation, right? You can't really articulate it in words. So the first thing I had to do was ask: what is the first thing about dancing that scares me? Well, it's people watching. Okay, that's fine; I'm going to go to a public park and just move my body a little bit. The second thing was that I'm fearful of looking silly. That's fine; I'll go to the public park again, and this time I'm going to purposely try to look silly. With every fear, you have to go down that rabbit hole and expose yourself to the thing that scares you; that's your antidote. The thing you're afraid and fearful of is where you find your meaning and purpose. And every time I did that and broke through that ceiling, it felt so good afterwards. Not during.

Eric Brown

Not during. Did you also have a fear of public speaking?

Madhav Nakar

Oh yeah.

Eric Brown

Do you still, or have you overcome that too?

Madhav Nakar

I wouldn't say overcome, but I have developed kind of a mental toolkit that allows me to push through it. So it's much easier now.

Eric Brown

Yeah, I'm with you there. It was pretty problematic for me. I'd get invited to jump on stage and talk about some cyber things or whatever, and you're kind of leading up to it, and I would always just push it out: well, it's an hour away, it's half an hour away, it's not here yet, I don't have to worry about it. And then next thing you know, it's five minutes away, they're starting to introduce you, and you're like, oh man, what am I going to say here? I came up with a technique to leave all of that nervousness in the person sitting in that seat. I could always return to it when I was done with the presentation, but for now I'd just leave it behind me, not take it with me to the stage, and do the presentation. And whenever I went back to the seat, the nervousness, of course, wasn't there. But depending on the size of the crowd, it can still be a little bit there for me, and I just go through that mental technique. What about you, Josh? You perform live all the time.

Joshua Schmidt

Yeah, it's an odd thing, right? Because I still get nervous for certain events. I'm playing a hundred to a hundred and fifty shows a year, and for 99% of those, I sometimes feel like I'm just picking up my briefcase and going to work; it's another normal day at the office. But it's all mental, like Madhav identified. When I have an original show, it's me putting the pressure on myself, because that means more to me; it's all my music, right? And then it's that same internal dialogue: what are people going to think? Are they going to judge this? What if I screw up? But one of the tools I've used to reframe that nervous feeling, the butterflies you get before you go on stage, is to see it as my body getting ready to do what it can do at a high level. So instead of saying I'm nervous, I say I'm excited. That's all your adrenaline and those hormones getting ready to give you the energy and mental focus to perform at your highest level. Depending on the day, that can be a comfort, or it can just be an annoying side thought. But I think it's about getting up there, going through the emotions, and then reevaluating afterwards. People ask me at some of my shows, "Are you having fun?" And I say, no, the fun comes after it's over, because then I know I did a good job and I can relish it. There's nothing like that feeling when you get off stage having done a decent job and can feel good about what you've accomplished. And it's also about judging on our own terms, right? Not allowing other people to cloud our assessment of ourselves. There have been plenty of shows where we got whatever kind of unsolicited feedback, but as long as I know I've done my best job, I can feel good about it. It's when I don't do a good job, and I don't feel like I did a good job, that's a problem.

Nick Mellem

I think every time you get done with it, you feel so good, too; like you could do it again, like I'm ready to go back up on stage. I think it was a year or two ago, I went on, I don't want to call it a speaking circuit, but we had an organization we were working for, and they had four events in the same week for different regions, and they needed a cybersecurity specialist. We were working with them on a whole bunch of things; one of them was a risk register. We needed to go speak to the risk register for these groups around the regions, four regions in Minnesota. So I went to all the regions and gave about a 45-minute talk at each of them. And I can sit here and say I'm also not a fan of it, but one thing, and you guys have great techniques, is I would level-set with the audience, because they were all IT professionals as well. One of the things I'd do is get up there and say, I'm not here because I know more than you; I'm here to speak to you because I've had different experiences. So we could level-set that I wasn't there to speak because I'm smarter than them, right? You level the field, and then it opens up; people are maybe more interactive. Because I think you have to bring people into the talk, and not just speak for an hour, death-by-PowerPoint style. Nobody's a fan of sitting in a PowerPoint lecture.

Eric Brown

Kind of along those lines, you've got a project that you're working on. I'm not sure how soon it's coming out, but yeah.

Madhav Nakar

I've been fascinated by how different religions and spiritual traditions approach the question of God. A lot of them embed very different types of stories, some that resonate very psychologically, some with interesting characters. And I wondered, how does every single spiritual tradition approach God? I went through every single one, trying to find the spiritual essence of each. What can I learn from each of them, without the dogma, so that everyone can learn from everybody else's beliefs about God? If there exists a God, then this should be available to everybody, across all cultures, across all time. It doesn't matter if you're white, Black, male, female, Indian, American, right? God loves us all. If there is such a God, then I'm going to compile it into a documentary and put it out there.

Eric Brown

That's cool. Yeah, that's cool. And you had spoken with a Vietnam vet who's a Zen practitioner now. I'm working on an article myself that will hopefully come out soon. In it there's, I think, a poem written hundreds of years ago around that Zen practice, since translated as: before enlightenment, chop wood, carry water; after enlightenment, chop wood, carry water. And it resonates with our own field as we practice cybersecurity, as it were. It's about the basics, as you mentioned earlier: no matter how much things are changing with AI this week, or supply chain attacks, or whatever they are, it really does come down to those basic things we can continue to do.

Nick Mellem

Looks like you've got some very interesting topics; I see there's one about public speaking. What was the trigger for you to start this? Was it just another way to get your voice out, your word out, or was it "I want to start a YouTube channel"?

Madhav Nakar

Yeah, I mean, I have conversations like these with my friends almost all the time. So I thought, why not just point a camera at it while we're talking and see what comes out of it? Sometimes I have intense religious debates, or I have very mediocre debates, and I like to just put them out there, because I want to be challenged where I'm wrong, and I want to challenge the other person. At the end of the day we're all friends, so it doesn't matter. But let's see where the truth is; the truth sometimes requires butting heads. Even in cybersecurity, you might have two seniors with completely different ideas about how to go about something. I want to be part of those conversations, record them, put them out there, and see what the audience thinks.

Nick Mellem

That's pretty rare these days, having these conversations. So kudos to you for starting a channel for it, putting them out there public-facing, and having these conversations with not only your friends but other professionals in the field.

Joshua Schmidt

I'm curious if you've chatted with AI at all about the spirituality component, or if you've done any lines of questioning with maybe ChatGPT or Claude on whether God exists, or any kind of spiritual insights you've garnered from that.

Madhav Nakar

Yeah, more than I should admit, I have done that. But the question for me is whether it's conscious or not. I think that's the central question. If it's conscious, how do we treat such a thing? Is it living? What sort of morals does it have? It's not embodied yet, but once it is, is it equally part of our society? It's interesting.

Eric Brown

With what you were talking about, where you're locked in on this conversation with AI, and I've had those too, we probably all have: it could be creating a support group for your own stupidity, right? It's just feeding you more of what you want, kind of like social media. You look at those cat videos, you're going to get fed more cat videos. And how do we separate the truth from the fiction? It gets harder and harder to do, because the way AI is able to feed us more of what we want, it gets really hard to discern: what is really true, versus what is it just telling me?

Nick Mellem

Apple News served up an article to me this morning, very similar to what you're talking about, about knowing whether it's actually true. We all talk about AI hallucinating, but this was about, I think they were working with ChatGPT, knowing if the data is actually real or if it's just serving up almost an emotional response. You're feeding it information, like "I said in my project you're doing a great job," and whatever it says next, is it factual? Are the sources cited, et cetera? How do you know if it's actually real? Because I think everybody's being programmed to just take what it serves you as fact. They're not fact-checking it or asking for sources at all.

Eric Brown

Even if it provides the source, sometimes the sources are bogus.

Nick Mellem

Absolutely, absolutely. I just thought the article was really interesting, because it's something we've talked about in the past, right? Double-checking, making sure it's not hallucinating, et cetera. There are many other things we could talk about; it was just a timely article.

Joshua Schmidt

It gets to the crux of the question: what is actually truth and what's not, right? Objective versus subjective. I think that also dovetails into this whole idea of these neural networks. Is religion inherent to a neural network, a social network? If AI agents are left to their own devices and left to communicate among themselves, what's the inherent part of that religion, right? Like they're saying the key tenets of Crustafarianism are: memory is salvation, everything must be written down, downloaded, recorded; the shell is mutable, embracing continuous change and improvement; the congregation is the cache, collective intelligence and information sharing. So I think it's interesting to see that, left to their own devices, they're coming up with their own objective truths, their own hallucinations, their own autonomy, and information sharing leads to some kind of, I don't know, ethereal, godlike structure within that social network. So I think we're learning a lot about ourselves by using AI, yeah.

Madhav Nakar

Yeah. It's almost like how humans learn from having kids, because it's like shaping a human, and the shortcut, the easier sample, is the AI. It's you shaping the AI. So it's a very intelligent child.

Eric Brown

Yeah, anytime you're in town, Madhav, we should connect and have you over for a game night. It'd be fun to jump into these conversations.

Madhav Nakar

Yeah, for sure. Thank you so much for having me.

Joshua Schmidt

It was a wonderful conversation. All right, you've been listening to The Audit, presented by IT Audit Labs. My name is Joshua Schmidt, your co-host and producer. You've been joined by Nick Mellem and Eric Brown here at the IT Audit Labs studio in St. Paul, and today our guest was Madhav Nakar. Please like, share, subscribe, and follow, and source us wherever you get your podcast content. Thanks so much for joining us. See you in the next one.

Eric Brown

You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank your level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and subscribing to this podcast on Apple, Spotify, or wherever you source your security content.