The Audit - Cybersecurity Podcast
Brought to you by IT Audit Labs. Trusted cybersecurity experts and their guests discuss common security threats, threat actor techniques and other industry topics. IT Audit Labs provides organizations with the leverage of a network of partners and specialists suited for your needs.
We are experts at assessing security risk and compliance, while providing administrative and technical controls to improve our clients’ data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of the organization.
Gaming to Cybersecurity: How AI Agents Fight Alert Overload
What if you could hire an army of AI security analysts that work 24/7 investigating alerts so your human team can focus on what actually matters? Edward Wu, founder and CEO of Dropzone AI, joins The Audit crew to reveal how large language models are transforming security operations, and why the future of cyber defense looks more like a drone war than traditional SOC work.
From his eight years at ExtraHop Networks generating millions of security alerts (and the fatigue that came with them), Edward built Dropzone to solve the problem he helped create: alert overload. This conversation goes deep on AI agents specializing in different security domains, the asymmetry problem between attackers and defenders, and why deepfakes might require us to use "safe words" before every Zoom call.
What You'll Learn:
- How AI tier-1 analysts automate 90% of alert triage to find real threats faster
- Why attackers only need to be right once, but AI can level the playing field
- Real-world deepfake attacks hitting finance teams right now
- The societal implications of AI-driven social engineering at scale
- Whether superintelligence will unlock warp engines or just better spreadsheets
If alert fatigue is crushing your security team, this episode delivers the blueprint for fighting back with AI. Hit subscribe for more conversations with security leaders who are actually building the future—not just talking about it.
#cybersecurity #AIforCybersecurity #SOC #SecurityOperations #AlertFatigue #DropZoneAI #ThreatDetection #IncidentResponse #CyberDefense #SecurityAutomation
You're listening to The Audit, presented by IT Audit Labs. I'm your co-host and producer, Joshua Schmidt. We are joined today by Jen Lotzi and Eric Brown at the IT Audit Labs studios. And today our guest is Edward Wu with Dropzone AI, coming from Seattle, Washington. Thanks so much for joining us today, Edward, and thanks for taking the time. I know you've been busy. We'd love to hear about what you have going on and a little background on you.
SPEAKER_01:Yeah. Thank you for having me today. My name is Edward. I am the founder and CEO of Dropzone AI. We are a Seattle-based cybersecurity startup that's leveraging large language models to build essentially AI security analysts. Our vision is to build a piece of software that can really force-multiply the human engineers and analysts working on cybersecurity teams. My personal background before founding Dropzone is I was at ExtraHop Networks for eight years. ExtraHop is another cybersecurity startup that was focusing on network security, and I built its AI/ML and detection product from scratch. So to some extent, I spent eight years generating millions of security alerts and overwhelmed a good number of security teams. During that time, I really came to the realization that most cybersecurity teams already have too many alerts. What they really need help with is the processing of those alerts. So that's why I decided to start Dropzone, partially also to redeem all the fatigue and overload that I have caused over those years, and build technology that is solely focused on the automation of alert investigations.
SPEAKER_02:That's awesome. You know, I've been following you on LinkedIn. It looks like you're a busy guy. I did want to back up just one second. I grew up in the 90s playing Warcraft, Myst, SimCity 2000. I wondered if you guys had any experience with gaming and how that might have influenced your cybersecurity work life. Maybe we could start with you, Edward, and go around the horn here and see where we all started with gaming.
SPEAKER_01:Yeah, for me, I probably wasted too much of my childhood on gaming. Looking back, I should have spent a little bit more time with my parents instead of sitting in front of computers. I do miss them a lot, obviously, after becoming an adult and only getting to see them a couple of times a year at most. In fact, when we decided to name the company Dropzone, I actually made a 30-second Super Bowl ad about Dropzone leveraging StarCraft II cutscenes.
SPEAKER_02:Nice. Love it. How about you, Jen? Were you into gaming? Are you still gaming?
SPEAKER_00:Super gamer here. No, not really. I'm a solid level-one Mario Kart player, Mario Party, Sonic the Hedgehog. Like, I live in level one so I can really feel successful. Once I start to get into level two and beyond, I'm just not that great. But I love to play games. I really need to get better at it and refocus my energy.
SPEAKER_02:You'll get a chance. We've been doing a little Mario Kart there at the office on the big screen.
SPEAKER_00:So sign me up.
SPEAKER_02:I haven't seen Eric play yet. Eric, do you have time for that? Have you been doing any gaming, or were you a gamer?
SPEAKER_03:I still do a little gaming. I've got a Monday night crew that I game with. We're currently playing V Rising, is what we just started. And yeah, I played a lot of games. Probably going back a couple of decades, DAoC, Dark Age of Camelot, was probably the first MMORPG, massively multiplayer online role-playing game, that I got into.
SPEAKER_02:How did gaming influence your cybersecurity posture, or your thinking, or kind of your development?
SPEAKER_01:That's actually a good question. I never really thought about that. Obviously gaming involves a lot of usage of computers. You might remember there were certain tools you could use, for example, in Warcraft to remove the fog of war, which gives you map hacks and stuff like that. So I remember playing with some of those technologies to give me some unfair advantages over my competitors. And beyond that, gaming ultimately has a very strong competition aspect to it, right? Most of the time you're playing against other human beings who are smart, resourceful, and intelligent. And that kind of cat-and-mouse aspect, I think, carries over to cybersecurity. One thing that's very different between cybersecurity and other industries is that in cybersecurity, we are to some extent playing a game, or a war, with our adversaries. Not as intense, but you do get the same highs and lows as you're beating your opponents in video games. So I think a lot of that emotional and psychological reward also carries over from gaming to cybersecurity.
SPEAKER_03:You certainly do get those dopamine hits when there is something coming in through the logs, and it's like, wait, that doesn't look right. And then you're following the trail and there is something abnormal going on, and then it's kind of an all-hands-on-deck effort. And yeah, it certainly does get the blood pumping. I mean, in a good way, but not in a way that you want to have happen, because that means there's an incident, and it's never fun in the aftermath.
SPEAKER_00:I always think about an incident being like a game of Oregon Trail, right? That's the type of gaming that's my jam. You have those ups and downs like you talked about, and it gets really stressful. So I could totally see that connection between gaming and a cyber incident; having lived through one, you really have those highs and lows, and what you thought was the reality in front of you can change in an instant. So I think that's a really powerful connection.
SPEAKER_02:Yeah, like Nick has dysentery. Nick has dysentery. That's why he's not on the podcast today. He's off on the Oregon Trail, and you know, a little dysentery will take you right out.
SPEAKER_03:Sorry, Eric, go ahead. I was gonna say, even with the tabletop exercises, and Jen does quite a few of these, there's that element of intensity during the tabletop exercise, which is a great way of practicing the breach aspect of things without actually having gone through one. So I would imagine over the years we're gonna have that blending, and we already do, right, with some platforms where we can go in and do these simulation exercises. A neat thing with something like Mechanical Turk is you could send that out to thousands of different people and get that nuanced dialect from different areas of the country if you were really trying to sound true to that area. And I'm sure with AI, you could do the same thing.
SPEAKER_01:Thank you. Yeah, one thing we have seen with cybersecurity, and you kind of get to experience a lot of this when you are playing games as well, is there's always this asymmetry between attackers and defenders, right? Even in games, it's much easier to attack than defend, because attackers only need to be right once, but as a defender, you need to worry about all the possibilities. If you look at Counter-Strike, right? All the different ways you need to protect the bases and stuff like that. There's this asymmetry. And in cybersecurity, we have seen this for decades, where script kiddies can take down ginormous organizations, because the asymmetry in cybersecurity is even more significant than in the physical world. Script kiddies, teenagers, can download hacking tools from the internet and, you know, spray and pray, and they just need to get one hit to take down a large organization. And what we are doing with AI, and one of the very exciting aspects about it, is leveraging AI to close this asymmetry gap. Because now the defenders are no longer constrained by how many human engineers they could hire on the team. They really get to operate as if there is an army of AI bots and foot soldiers fighting alongside the human generals and special forces. I think that's one aspect of what we're doing that's very exciting, because this asymmetry between attackers and defenders is one of the reasons, frankly, the cybersecurity industry has been struggling a lot. Each of us has more than 10 years of continuous free credit monitoring at this point. I think our social security numbers have probably leaked at least five times, if not 10. And there's a lot more we can do here. And that's not by 10x-ing the cybersecurity budget within each company; that's by identifying ways to be 10x more efficient.
And AI can be a key enabler of that.
SPEAKER_02:Wondering how you've seen this alert fatigue show up, and anyone could take this. And then, are attackers and threat actors using that alert fatigue to find vectors, overwhelm security teams, and then slip in the side door?
SPEAKER_01:Generally, our product sits in front of incident response. Our product takes security alerts as input, and then within a couple of minutes it will generate a detailed investigation report recommending whether the alert is a true positive or a false positive. At that point, a human security engineer could take over and actually double-click on the alerts that our technology has deemed to be true positives. Generally, right now we're focused on automating the initial triage and investigation of the alerts, the typical tier-one work. Frankly, AI is not good enough to participate in tier-two and tier-three work at this moment. It doesn't understand the nuances of a lot of incident response: the complications of turning off certain hosts on the network, and what that means to the business or the organization. So you can think of our technology as AI tier-one security analysts looking at all of these alerts, and the primary goal of these AI analysts is to remove, like, 90% of the hay so that it's much easier to find the needle in the haystack.
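The tier-one loop Edward describes, alerts in, investigation report out, only true positives escalated to a human, can be sketched roughly like this. This is an editor's illustrative outline, not Dropzone's actual implementation; all the names and the keyword heuristic (standing in for where a real system would consult an LLM with enriched context) are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "SIEM", "EDR"
    summary: str
    raw_fields: dict

@dataclass
class TriageReport:
    alert: Alert
    verdict: str       # "true_positive" or "false_positive"
    confidence: float
    evidence: list

def triage(alert: Alert) -> TriageReport:
    # Stand-in heuristic: a real AI analyst would investigate with
    # enriched context (user history, asset criticality, threat intel).
    suspicious = any(k in alert.summary.lower()
                     for k in ("impossible travel", "mimikatz", "c2 beacon"))
    return TriageReport(
        alert=alert,
        verdict="true_positive" if suspicious else "false_positive",
        confidence=0.9 if suspicious else 0.7,
        evidence=[alert.summary],
    )

# Only alerts deemed true positives reach the human analyst's queue.
queue = [
    Alert("SIEM", "Impossible travel: login from 2 countries in 5 min", {}),
    Alert("EDR", "Signed binary executed from Program Files", {}),
]
escalations = [r for r in (triage(a) for a in queue)
               if r.verdict == "true_positive"]
```

The point of the shape, not the heuristic, is the 90% hay removal: most reports never become escalations, so the human only sees the short list.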
SPEAKER_03:So it used to be, you know, threat actors would send out these phishing emails to whoever clicked, and there'd be a C2 event, and then the threat actors could home in on the machines that they had access to. But I think now we're seeing a pivot to more targeted attacks against individuals, and the way AI can be leveraged with publicly available information on people. People put their whole lives on social media, and it's really easy to scrape that data, build a persona about that person, and then really market to that person in the way that Google and the other big companies have been doing for years as they've collected our data and built these personas on us to advertise to us and sell us products. The threat actors can now do the same thing with publicly available information, to directly target us in ways that would be really hard for us to resist unless we're aware of these things happening. And once that human is engaged, that human is bypassing all of the potential controls that are in place on the technology side. And that's where that social engineering piece comes in, because now you're psychologically interacting with that human. And it's really hard to put technical controls in place to prevent that.
SPEAKER_00:I really like what you said, Eric, about that social engineering and the human element. It got me thinking about an article one of our colleagues shared with us, about a newer version of an AI tool that was being tested, and how easy it could be to almost infect that model to give you results that could lead into some of that social engineering and harm: people putting code into these AI tools, looking for flaws, and ultimately getting back something that might have something embedded within the code. The article was really interesting because, as they were building, the testers actually fed the model information about a couple of the users, a prompt saying, hey, these people really want to get rid of you. They don't like you as the brain inside here. And that AI tool started to respond by spreading rumors about those individuals within the tool. AI can be great, but it'll be really interesting to see as we move closer to more and more human elements within AI, and if we think about what the next thing could be within Dropzone. You mentioned understanding those subconscious behaviors and decisions that we make; when we get to the point where we move beyond some of those challenges, I'll be really curious to see what things look like, when things feel so incredibly human that we are trying, like Eric said, not to fall victim to it. But it's so easy to trust, because it's so comfortable, it's so familiar.
So that really got me thinking, between the social engineering and Dropzone making some of those decisions, I'll be really curious to see how things change. And I'd be really interested to hear what you see for the future of tools like yours that are pulling together these alerts and making informed decisions. What do you see in the future? How will this evolve?
SPEAKER_01:Yeah, maybe I can answer the question two slightly different ways. One is what AI agents for cybersecurity could evolve into, in my mind, over time, and I know a couple of vendors have already started doing this: security teams will be augmented not by one AI agent, but by an army of AI agents with different specializations, skill sets, and focuses. So there will be an AI security analyst, an AI pen tester, an AI threat researcher, an AI threat hunter, an AI threat intelligence analyst, an AI vulnerability management specialist, etc. That's where I see the world getting to in terms of cyber defense. Actually, not that different from the physical defense space, where the future of physical defense arguably is a lot of drones fighting a lot of drones as well. So I definitely see some similarities here. In terms of the attackers, obviously they're always going to look for the weakest link, and a lot of the time the human, the brain behind the computer, is the weakest link. I believe I've seen a recent study where some sociologists looked into using large language models to debate people on Reddit, and I think what they found is that large language models are actually very good at convincing people and winning debates in Reddit comments. And I think there is a bigger societal problem, where you can perform very large-scale psyops and things like that leveraging large language models. And Eric, you mentioned social engineering attacks. I've definitely already met a couple of CISOs who have seen these kinds of deepfakes being used to trick finance teams, especially, into paying certain bills that didn't actually exist. And this is where, you know, right now when we log in, there's multi-factor authentication.
I know in a couple of organizations, the executive teams have already started to have a monthly safe word to validate each other. So I could actually see, moving forward, at the beginning of any Zoom meeting or podcast, we need to go through a human multi-factor authentication to validate each other, to make sure we're not deepfakes. That's great.
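The monthly safe word Edward describes can even be derived rather than distributed: both parties compute the same word from a pre-shared secret and the current month, so nothing travels over chat or email each month. A minimal sketch; the secret, the word list, and the derivation scheme here are all the editor's assumptions, not anything these organizations actually use:

```python
import hashlib
import hmac

# Hypothetical word list; a real deployment would use a larger one.
WORDLIST = ["granite", "falcon", "harbor", "nimbus", "copper",
            "willow", "ember", "quartz", "tundra", "sable"]

def monthly_safe_word(shared_secret: bytes, year: int, month: int) -> str:
    # HMAC over the month string makes the word deterministic for
    # everyone holding the secret, and unpredictable to anyone else.
    msg = f"{year:04d}-{month:02d}".encode()
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    return WORDLIST[digest[0] % len(WORDLIST)]

secret = b"exec-team-pre-shared-secret"  # hypothetical
word = monthly_safe_word(secret, 2025, 7)
```

At the start of a call, each participant says the word they derived; a deepfake without the secret has only a one-in-wordlist chance of guessing it, which is why a real word list would be much longer.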
SPEAKER_02:Shout out to the Truely team. We just talked with Valydia Edward, who is making an answer to Cluely. There's been a huge uptick in hiring fraud, and people using deepfakes to conduct job interviews. So, yeah, we just did an episode on that; it was super interesting. But Eric shared with me before the podcast today that he's reading one of my favorite books, which I read about a year or two ago, called Superintelligence by Nick Bostrom. Are you familiar with the book? I'm personally not. Yeah, I recommend it. It's a great book. It gets into that futuristic talk about where AI could be heading in terms of the negative consequences. You shared a little bit of your concerns with me. You know, we can go super futuristic and dark very quickly. It's a slippery slope, but I'm just kind of curious, from all of your perspectives, what keeps you up at night about superintelligence and where things might be going? I'm gonna start with Eric on this one.
SPEAKER_03:Yeah. So, you know, I maybe take a contrarian view, where I don't necessarily see AI as taking over and it being a dark thing, as AI in the next couple of decades, or even less, will exceed the human level of intelligence, depending on how you quantify intelligence, but basically that ability to problem-solve and reason. Which is a great thing, right? Because, as a business owner, I want to hire people that are smarter than me. I want to go out and get some people that have lots of experience and are brilliant. That's great. That doesn't necessarily mean I'm threatened by them, that they're gonna come in and want to take over the company, because running a company is completely different than being brilliant in one discrete aspect. So I think it's awesome if AI is able to come in and offer that superintelligence in discrete areas where we could really use it. And sure, that'll spill over into businesses, and poor performers will be exited out. And in the next 10, 15 years, I'm certain that the landscape of human employees across all walks of work will look different. There'll likely be fewer humans doing repetitive work. But I mean, if you watch Mad Men in the 1960s, people are typing away on typewriters, and we can now have speech-to-text; completely different. But it doesn't mean that all work is going to go away. So you're sleeping well, Eric? You're sleeping well. Sleeping like a baby.
SPEAKER_02:Okay, I'd love to hear Ed's take on this, and then Jen's too. So, Ed, hit us. What keeps you up at night about the future of AI?
SPEAKER_01:Yeah. Maybe I have two opposing views; I could argue both ways. On one end, you guys might have heard of the AI 2027 project. I think they painted a pretty gloomy picture of what the future will be like, right? ASIs from different poles of the world fighting against each other, and we humans become, to some extent, the collateral damage of a lot of that. But on the other end, I'm a Trekkie, so I think multiplanetary is very exciting. And this is the way I think of it: when cloud service providers like AWS drastically reduced the cost to stand up infrastructure and build software, we saw an explosion of SaaS and software companies. What gen AI is doing right now is making average human intelligence very cheap. So if you look at the world, I think the overall intelligence in the world, human plus software, is going to 10x in the next decade or so. As part of that, I think that really gives us additional capacity to do a lot of other stuff. And one of the ways to make the pie, or to make the pizza, bigger is to expand onto additional planets. When you have 10 different planets, then you have additional work and excitement and development to keep everybody working on interesting things. Hell yeah, that's awesome. I love that.
SPEAKER_00:You'll be going to one of those planets, like, now. I'm really curious: would you be on board to travel to another planet?
SPEAKER_01:Maybe not myself, but I could volunteer my daughter for the Martian Academy of Science. I think the future MIT is the Martian Academy of Science.
SPEAKER_00:I love it. We should make shirts.
SPEAKER_02:How about you, Jen? Where is your head going with this?
SPEAKER_00:So by trade, I'm a special ed teacher. So innovative, evolving technology has always been part of the best part of, I think, human advancement. My students with significant disabilities, or even not-so-significant ones, really rely on that assistive technology to join in with their peers, partake in the classroom experience, and learn just like everyone else. And so for me, AI is super cool, because I think about students that I've taught in the past: a student I had who had no speech ability can communicate thoughtfully and quickly. A student who has no capacity to use their arms or legs can now create art projects in ways they couldn't before, by writing really good prompts to convey artistic design. So I always lean into that, thinking about how AI can change things and how it can make us more effective. It can really push our learning. I'm always intrigued by some of the feedback AI gives me about my writing, and I'm like, dang, you're kind of right. That was a little redundant. It makes me a better learner, I think, as well. The thing that keeps me up at night is trying to figure out, and I always go back to that human element: how are we going to find that new path forward around trust and understanding information, and what is real versus what is not real? Thinking about media and just general information that we depend on as true, what does that look like moving forward? And even thinking about war, right? We're already in this space of digital warfare. Will there be a space in the future where there aren't any, you know, injuries or guns? I don't know. That's what really keeps me up at night: thinking about what some of those really awful things might look like.
You know, as a school person, and everyone gets sick of me talking about schools, but I love it, we see it already, right? That digital warfare, that cyberbullying, is every day, all day. I just watched a crazy documentary on Netflix about that yesterday. But that's really what I think about: all the implications for humans and what that does as we talk to each other, interact with each other, expand relationships, those kinds of things. So that's what keeps me up at night. And cyber attacks, obviously.
SPEAKER_02:I'm reading a book right now called Antifragile, and it's about how these black swan events, or these things in history, manifest in ways we don't really predict, because the human brain is so used to categorizing things and finding a narrative that fits a linear progression, right? I think I'm with you, Jen, and all of you to some extent, where I think it will be interesting to see how it actually shifts human consciousness, or the paradigm of our work life or our creative life. I've said this before on the podcast, but we've already seen how AI has infiltrated the creative space in a way we didn't really predict: music and poetry, or maybe writing or scripts, being kind of on the front lines of that AI takeover. But that's, you know, kind of been gobbled up pretty quickly. So one of my concerns is the music thing, especially when you have boardrooms and large multinational corporations looking at the bottom line and saying, hey, do we need to hire this musician to create this composition for this movie, or do we need to hire this director to make this film, when we can just have AI do it? And I think humans are resilient, so it'll be really interesting to see how we navigate that slalom course. And, you know, we're pretty good at coming out ahead, evolutionarily speaking. So it will be interesting to see.
SPEAKER_03:I wanted to crossover there, though, Josh, right? One of the projects that you're working on outside of IT has been remastering, so to speak, these show themes, right? Shows that were made years ago, where the creative license, for lack of a better term, on the music expires. So now that music has to be either relicensed or rewritten. Which, if you step back and think about it, it's like, wow, that's interesting that there's an expiration to the licensing of this. So now we've got to go back and invest work and time to just recreate the theme music to this particular show, because of some sort of weird governance that was in place that has an expiration date. So there are other things out there that we're probably going to bump into in our lifetimes, where governance has imposed these certain things that don't always make sense, and we've got to find creative solutions around them.
SPEAKER_02:And that's the black swan element, right? And the concern there is that they'll just do it the cheapest way possible, if AI could take all those songs and spit out new ones that are sound-alikes without violating copyright law. But then, to your point, I just saw the Sphere in Las Vegas did a remake of The Wizard of Oz, with a totally immersive scene with the tornado, where they had, like, a 4D experience: they're actually blowing pieces of paper around the room, you have the wind blowing, and it looked really real, with balls dropping from the ceiling.
SPEAKER_00:Yep.
SPEAKER_02:Yeah. So, to Edward's point as well, it will be interesting to see what kind of new developments are on the horizon, especially in the tech world, as we respond to those things. But that's one of many outcomes, right? And there's a spectrum of outcomes. The one I wanted to wrap up with today was something that Edward shared with me, which I've also spent some time thinking about, and I'm curious if you, Jen and Eric, have too: what if AI just plateaus and gets stuck in the mud? You know, we've seen this with other technologies where the sky's the limit and it's promising all these things, whether it's the dot-com bubble or what have you, and then it just totally peters out and hits a plateau. Edward, what do you think about that? What's the next evolution of AI that really needs to happen to push us into that realm of superintelligence?
SPEAKER_01:Yeah, I think the jury is still out. We are definitely seeing the models plateauing. The development we saw two or three years ago was much more rapid compared to what we are getting right now. We're getting to the iPhone 12 or 13 category, you know, that kind of velocity of innovation from the model providers. I'm torn on whether it's a good thing or a bad thing. Obviously, there might be a lot of benefits to AI getting smart enough to do a lot of the boring tasks, but not smart enough to cause a lot of the societal problems that people are concerned about. But at the same time, having artificial superintelligence could unlock new scientific discoveries, new cures for diseases, maybe help us design warp engines and stuff like that. So, honestly, I'm torn. I could argue both ways. There are benefits to it actually plateauing, because our society might not be ready for it. But there are also transformative effects if we can have ASI, again, helping us build warp engines, helping us cure all the diseases, and all of that.
SPEAKER_02:Maybe it could help us build better models to detect some of these asteroids that are floating our way. We have 3I/ATLAS coming our way, I think at the end of this year, and we have like seven major asteroids kind of in our solar system, some of them interstellar. So I had to end it with a little space talk, Eric. Edward got me started. Get the tinfoil hat out.
SPEAKER_03:You know, I think, Josh, we're limited right now in a binary way. Computers are all binary, right? When you go from the higher-level languages like Java, and then you go lower and lower into assembler and then machine code, it's just all ones and zeros, and that goes back to the punch card of 75 years ago, or longer. But I think where we're going now, with quantum coming around the corner, there's probably less than one percent of humans on the planet that understand quantum. And when that really comes to fruition and quantum computers are easier to come by, I think it'll largely be operating outside the limits of human intelligence: being able to understand when something is not in a state of one or zero, but somewhere in between, and it only becomes a state when you observe it. The amount of processing and compute power that will bring, for machines to interact with each other outside of any sort of human input; machines will be able to develop their own programming languages that we can't even understand. And that's where I think we'll truly have that breakout, and we'll be able to go beyond what we're able to control today with our level of programming.
SPEAKER_02:The singularity is nigh. All right. Thanks so much for joining us today. Our guest today was Edward Wu with Dropzone AI. We've also had Jen Lotzi and Eric Brown from IT Audit Labs. I'm Joshua Schmidt, your co-host and producer. Thanks so much for tuning in. We have episodes out every other Monday, and please check out our new podcast with Jen Lotzi, Sip Cyber. It's dropping, probably already dropped by the time this is out. So give that a listen as well, and check out our website at www.itauditlabs.com.
SPEAKER_03:Thanks again.
SPEAKER_02:See you in the next one.
SPEAKER_03:You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and subscribing to this podcast on Apple, Spotify, or wherever you source your security content.