The Audit - Cybersecurity Podcast
Brought to you by IT Audit Labs. Trusted cybersecurity experts and their guests discuss common security threats, threat actor techniques, and other industry topics. IT Audit Labs gives organizations the leverage of a network of partners and specialists suited to their needs.
We are experts at assessing security risk and compliance, while providing administrative and technical controls to improve our clients’ data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of the organization.
The Audit 2025: Deepfakes, Quantum & AI That Changed Everything
In this special year-end episode, Joshua Schmidt revisits the most mind-bending moments from The Audit's 2025 season: Justin Marciano and Paul Vann demonstrating live deepfakes in real time (yes, they actually did it on camera), Bill Harris explaining how Google's quantum experiments suggest parallel universes, and Alex Bratton's urgent warning about the AI adoption crisis happening right now in boardrooms everywhere.
What You'll Learn:
- How adversaries are using free tools to create convincing deepfakes for job interviews and social engineering attacks—and why this represents a national security threat
- Why NASA shut down its quantum computer after getting results that "challenge contemporary thinking" (and the wild theories circulating about what they discovered)
- The critical mistake companies are making with AI integration: racing ahead without governance, security frameworks, or responsible use policies
- How the Pi-hole community exemplifies open-source security at its best—enterprise-grade protection at a fraction of the cost
- Why IT teams saying "no" to AI isn't realistic, and what responsible AI adoption actually looks like
This isn't just a recap—it's a wake-up call. These conversations reveal the inflection points where standing still means falling behind. Whether you're a CISO, security analyst, IT auditor, or business leader trying to navigate AI adoption, these clips offer the perspective you need heading into 2026.
Don't wait until 2026 to realize you missed the critical shift. Subscribe now for cutting-edge cybersecurity insights that keep you ahead of evolving threats.
#cybersecurity #deepfake #quantumcomputing #AI #infosec #ethicalhacking #cyberdefense #2025yearinreview
Hi, I'm Joshua Schmidt, and welcome back to The Audit. As we close out the year, I wanted to take a beat and look back at some of the conversations that really stuck with us. The episodes that made us think differently, the guests who challenged our assumptions, and the moments where we realized cybersecurity isn't just evolving, it's fundamentally transforming. This is our 2025 year in review. Okay, for our first clip, I wanted to share something from episode 72 of The Audit that honestly changed how I think about hiring. We sat down with Justin Marciano and Paul Vann from Validia to talk about the deepfake workforce. And this isn't just some distant threat. This is happening right now. Towards the end of our conversation, they showed us just how ridiculously easy it is to create a convincing deepfake for a job interview. We're talking minutes, not hours. And the implications? Well, they're staggering. We're looking at national security threats, intellectual property theft, and entire industries vulnerable to infiltration. So settle in and let's take a look back at a year that proved one thing beyond any doubt: in cybersecurity, standing still means falling behind. Let's dive in. Logged back in here on The Audit to talk with... I don't know. Who are we talking to today, fellas?
Justin Marciano:Who do we have? It's Justin Marciano here, in a different body, in my roommate's body. Shout out to Edward Masarro. Sorry for putting you on the podcast here, but you did give me permission to use your name, image, and likeness.
Paul Vann:So here we are. And then we've got me as Justin Marciano here, a live deepfake we prepared a little bit before this call.
Nick Mellem:That is really wild.
Justin Marciano:So yeah, as a quick explanation, here are kind of two different versions, right? Paul's is a live deepfake that was pre-recorded. We can stream live if we want to actually do a live deepfake. The purpose of what I'm doing, of this actual product here, is more for people that are on the road, on a ski lift. You can essentially just be in a controlled environment and train a model on that. So that's what's running in the background right now through this camera and with the voice. And then on Paul's end, you can legitimately produce real-time deepfakes nowadays, where you take someone's face and use a voice-changing tool at the same time and have a conversation just like that.
Eric Brown:And Justin, is that tool called Pickle, the one that you're using?
Justin Marciano:Yeah, I'm using Pickle. And Paul, what tool did you use again? There's a million open-source ones.
Paul Vann:The video that I recorded is actually fully open source. It's using Deep-Live-Cam. You can install it on your Mac, connect your webcam, and swap your face in real time. Like I said, this one's pre-recorded, but we did it live and just screen-recorded the live rendition.
Nick Mellem:Meaning as Elon Musk.
Justin Marciano:Oh yeah. There's a lot of videos on X of people doing that, like a live stream with his face, which has actually caused some pretty significant scams too. That's the reality of it.
Joshua Schmidt:The one Justin's using, Pickle, I could see how someone could use that today, and then maybe they freeze it intentionally and just go, oh hey, my screen's frozen, or my camera's frozen.
Justin Marciano:And so this is me in a similar environment, not the same environment. Give it a sec to start the lip-sync control. I probably filmed it right in this room, the same room. So give it a second, and then... yep, lip sync is now back on. So yeah. A little bit wider of a mouth for sure, but it goes to show you can have different personas. It's supposed to be just of you, for context, but adversaries and people will use the technology for whatever purpose they want. So I got to use my roommate there too. Might get me banned from the platform, but it is what it is.
Nick Mellem:I just downloaded Pickle.
Eric Brown:Well, there you go, Paul. You switched it. Nice.
Paul Vann:Yeah, so to create these deepfakes, you have to have a virtual camera. I was just able to swap my virtual camera. It's pretty cool though. You can actually see that I can almost double up, in a way, and have a little bit here, a little bit there. But yeah, virtual cameras are fantastic. That's how people are creating these deepfakes today.
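What Paul is describing is simple plumbing: grab frames from the physical webcam, run each one through a face-swap model, and publish the result as a virtual camera that meeting software treats like ordinary hardware. Here is a minimal sketch in Python, assuming the opencv-python and pyvirtualcam packages; apply_face_swap is a hypothetical stand-in for whatever model the real tools run:

```python
import cv2
import pyvirtualcam

def apply_face_swap(frame):
    # Hypothetical placeholder: tools like Deep-Live-Cam run a
    # face-swap model on each frame here.
    return frame

capture = cv2.VideoCapture(0)  # the physical webcam
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Publish processed frames as a virtual camera device that
# meeting software can select like any other webcam.
with pyvirtualcam.Camera(width=width, height=height, fps=30) as cam:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame = apply_face_swap(frame)
        # pyvirtualcam expects RGB; OpenCV captures BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
```

The point of the sketch is how little is special here: once a virtual camera driver is installed, anything that can draw a frame can impersonate a webcam.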
Joshua Schmidt:All right, we're on to clip number two. This is where things get really interesting and kind of nerdy. Bill Harris joined us to talk about quantum computing and what happens to all of our encryption when quantum becomes practical. Bill's knowledge goes way beyond just the security implications. Now, I know what you're thinking. Parallel universes on a cybersecurity podcast? Just trust me on this one. I'll let Bill break it down.
Bill Harris:So I've got one more thing that actually goes into philosophy a little bit here. I want to call out something that Google uncovered, and then something that NASA uncovered. So Google was doing some quantum operations. This wasn't that long ago; I think it might have been last year. And there's a post on their blog about it that said they got some unexpected results that they believe lend credence to the theory of parallel universes. They put this right out there on their blog and wrote a bit about it, which I found really surprising. But what was even stranger was that there is this story circulating. And the first part of the story is true: NASA shut down its quantum computer in February of 2024 because they got some unexpected results that they said challenge contemporary thinking. And they shut it down and, with no further comment, began looking into it.
Eric Brown:Well, hold on a minute. Is that like the Catholic Church shutting down science because the Earth is no longer the center of the universe?
Bill Harris:Well, not necessarily. It might be that, but you're doing exactly what people are doing, right? People were wondering, what's happening here? So imaginations have really run rampant. And the stories right now range from, well, they just found some new math that they're trying to resolve, to they've stumbled into some alternate reality, or they've stumbled across some type of extraterrestrial intelligence. Yes, like the movie Contact. Remember that?
Joshua Schmidt:Great flick.
Nick Mellem:Yes. Yeah, I had it sitting back here for like six months and I finally threw it out.
Joshua Schmidt:Get it out, because I'm bringing it there. Here we go. Have you heard of the Mandela effect? I want to say yes, but go ahead. Hit us with it, Josh. This has been spreading like wildfire over the last probably five-plus years on the internet. Have you heard of it, Bill? I've not. Maybe it was the, what's the particle accelerator in Switzerland called? The Large Hadron Collider? Yeah, yeah. The theory goes that CERN started messing with that and we slipped into a different timeline. So there are people that remember things inaccurately. Nelson Mandela dying is one of those, where some people remember him passing away, and some people remember him being released from prison. Another example would be the Berenstain Bears, the way it's spelled: Berenstain or Berenstein. It goes into pop culture, verbal cues, like "mirror, mirror on the wall" from Snow White. It's actually "magic mirror on the wall." So they're saying that we slipped into a different timeline at some point.
Eric Brown:I'm gonna use this in my next argument at home with my wife, right? Where I'm constantly like, yo, you're in a different dimension here. That's the best case.
Bill Harris:So maybe that's lending some credence to this. It's funny you should say that, because that's exactly where some of these arguments are going. They're drifting off into quantum memories, and the theory that you actually never really die, because at every juncture in your life, like Schrödinger's cat, you either live or you die. And so a version of yourself will live in perpetuity as it branches off at every juncture. You'll die an infinite number of times, but there's always another one of you out there in some type of alternate universe.
Eric Brown:Why did they shut it down if they found something interesting?
Bill Harris:They didn't understand the results. The results they got from their quantum machine did not correlate with their understanding of physics. Or they just didn't like what they got back.
Joshua Schmidt:All right, moving on to clip number three. For our next clip, we're jumping to episode 75 with Alex Bratton from Lextech. Alex came on to talk about something every business is dealing with right now: AI integration. And here's the thing: companies are racing to adopt AI tools faster than they're thinking about the security implications. Alex breaks down why businesses need to pump the brakes and actually think through the ramifications of integrating AI before it's too late.
Alex Bratton:And that is the challenge that we've got right now, because people are blazing straight ahead. Rewinding a little bit, I'll use the iPad as the example of when it came into the business world. It was brought in by business leaders who were like, I need this, I want this, and whether that was a doctor or a CEO, it didn't matter. And the technology teams were so far behind that it made it difficult for everybody. They couldn't figure out, okay, how does this connect to the network? How do I secure it? How do I do anything with it? And many IT teams today, at any company size, are doing the exact same thing. They're struggling. There are so many ways that we can empower teams, but is this tool secure? Did somebody just sign up for a free tool that's stealing all of our information? So step number one is stepping back and communicating with the team: hey, what are our expectations? What is responsible use of AI? It doesn't mean here are the five tools you're allowed to use. Or, something we see very frequently in big companies, the answer to AI use is no. Well, that's not real. Come on. What are we gonna do with it? What does responsible mean? Number one, you have to understand the licensing terms of the thing you're using. What does it do with our information? And for non-geeks, looking at legal agreements is not awesome. Internally, we wrote a custom GPT to analyze that. We could give it a tool name, and it would go out, grab all the documents, bring them down and say, yes, they're gonna train on your data. And that's the magic question. The simple statement being: if it's free, you're giving them your data. Forget it. Don't ever do that. If it's paid, okay, then get somebody involved if you're gonna use it officially. But while you're tinkering, don't give it state secrets. That doesn't work. Now for me personally, again, coming in on the Vision Pro, one of the things I love about Apple is their privacy stance. I think one of the huge challenges, and I don't know that folks see it coming, is that when we couple AI with a company that hosts all of our emails and our documents and makes money by selling advertising, that's a bad combo. And we have two megacorps that sit at the center of that. That's dangerous.
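Alex's internal tool is easy to picture in miniature: hand a vendor's terms to an LLM and ask the one question that matters, do they train on your data? Here is a minimal sketch in Python using the OpenAI chat completions API; the prompt, model choice, and file name are illustrative assumptions, not Lextech's actual custom GPT:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def trains_on_your_data(tos_text: str) -> str:
    """Ask the model whether a vendor's terms permit training on customer data."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You review software licensing terms. Answer YES, NO, "
                    "or UNCLEAR, then quote the clause that says whether "
                    "the vendor may train models on customer data."
                ),
            },
            {"role": "user", "content": tos_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # vendor_tos.txt is a hypothetical local copy of the vendor's terms.
    with open("vendor_tos.txt") as f:
        print(trains_on_your_data(f.read()))
```

Even a triage this crude beats nobody reading the terms at all; anything flagged YES or UNCLEAR goes to a human.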
Nick Mellem:You lobbed it up perfectly, Alex. I was thinking about these things along the same lines. If organizations haven't stood up a governing body for AI, get a board of people, four or five people, whatever you decide on, and start deliberating on what direction you're taking with AI. Because it's coming, and you have to do it now. Because if we don't, your employees are gonna do it on their own anyway.
Alex Bratton:Shadow IT is real, it's gonna happen. Of course.
Nick Mellem:Yeah, we see it all the time through Proofpoint. We see it at one of our clients: people are emailing between their work and personal accounts, back and forth. So what do you think they're doing, right? They're taking the information they're working on at work, sending it to the LLM on their personal machine, and sending the output back. So they're using it already. So first off, figure out the direction you want to go. Peel the hood back and figure out what you want to use it for, whether you're gonna use Claude, whether you're gonna run in the cloud or on-prem, right? How are you gonna do this? And then you have to get policies and procedures out. You have to educate your staff, and then you have to train your staff on how to actually use it, right? It's one thing for us to say, oh, don't share state secrets, right? That's just a given. That's something the three of us just know is a no-no; we're not gonna do that.
Alex Bratton:But what does it really mean?
Nick Mellem:Yeah. What does it really mean? Exactly, right?
Alex Bratton:Yeah, totally agree. And it's helping them be comfortable with it. And again, for me, that's about how we're baking this into our culture. This is the new norm. This isn't as simple as, hey, we're just gonna hand Excel to everybody who doesn't have it. We're not just giving you a tool; we have to change how we're thinking about work. Every time I'm about to do a task: how could AI maybe help me with this? And give simple guidance, not scary IT stuff of no, no, no tools. We have to embrace the whole company in surfacing this stuff. IT can't own all of it.
Nick Mellem:Totally agree. I actually think organizations could run the risk of overprotecting this. You can tighten the bolts too hard. And we run that balance in cybersecurity all the time, right? People need to be able to function. In a perfect world, we would just disconnect from the internet, right? And then we're good. In the cybersecurity world, that's not an option for us. So we've got to ride that razor-thin line of not tightening the bolts too hard. And we're having the same problem, I think, with AI. I think a lot of organizations are gonna come in and put too many policies and procedures around it. We should be deleting enough that we realize we need to add things back.
Alex Bratton:It's interesting that you hit on potentially clamping down too hard. That's something we see very, very frequently in big businesses, in the Fortune 500, because they have very sophisticated IT, security, and data teams. Especially over the past year or two, many of those teams have raised their hands saying, we own everything. Nobody touch it, nobody do anything. And unfortunately, many of them have been successful in wrapping their arms around it and kind of shutting everybody out. Which, again, totally misses why this technology exists. It doesn't exist for that group of people; it actually exists for the people who aren't the experts, aren't the geeks. Wait, I can just talk to something? I can chat with something? And so where we're seeing people be successful is actually focusing on the employees. Maybe it's an airline pilot or a flight attendant, maybe it's a salesperson. What would help this person do their job? What would empower them? What would take friction out of the way? And the part the groups that are locking it down are missing is: if we can identify the couple of things we want to help people with, we can just trace a thin line through the back-end systems, the data, the policies. We can figure out that part first, and then we can do the next line and figure out that part. Instead, we're boiling two oceans. We're boiling the ocean of what's all of our data, make it accessible. Well, to power something for this person over here, you might not even have the right data yet. But most folks say, hey, we have this, so let's take our ocean and figure that out. And then let's look at the systems: how do we AI-enable all of these systems and make them all accessible? I appreciate the thinking, but spending three years on that means you are gonna be so far behind. When I look at mid-market and smaller businesses, I love the passion of business leaders saying, we have to do this; hey team, let's figure this out. And they are moving so much faster, which actually hints to me that I think we're gonna see a lot of leadership positions in a bunch of different industries change.
Joshua Schmidt:Is that a sea change in thinking of employees not just as another cog in the wheel to accomplish a task, but treating each individual like their own entrepreneur within the company? Is that kind of where you're coming from with that AI enablement?
Alex Bratton:That's an interesting way to put it. And that is my core belief: things like AI should be giving people superpowers. We should be helping that one person be 10 or 100 times more effective, not asking how do I implement AI so I can fire my whole team. For anybody doing that, number one, you're focused on cost cutting, and that's not good for growth; that's not where you grow. Number two, the great ideas of where AI is going to transform the business come from those people. They're the ones that have the ideas. So once they get comfortable and can lean in with that entrepreneurial, always-learning, hey, what if? When they start to ask the what-if across 10 or 100 or 1,000 employees, that changes everything.
Joshua Schmidt:Seems like it started with kind of an altruistic stance. I mean, it's baked into the name, right? OpenAI. It started out open source, I believe, but now we have this kind of AI arms race going on with nation-states. How do you see that squaring off? It seems like things should be moving more toward open source if everybody's enabled to be their own entrepreneur and everyone has Leonardo da Vinci and Albert Einstein in their pocket. How do we get from where we are, keeping things closed but allowing people certain liberties? And how are we going to balance all that moving forward?
Alex Bratton:That's an interesting one. I don't know that it necessarily needs to go open source. And again, back to what we humans control: we control being great communicators, we control coming up with the ideas and framing what it is that we want the AI to do. And then even, hey, here's the process. When we're building things, for example, even in the agentic world: I mentioned ChatGPT earlier. It's a simple go-to. But you know what? If I'm writing code on my Mac and I want it generating different types of code, I might be pointing at different systems. So the simple statement of never being locked into a single vendor is more important now than ever in technology. We have to be crafting things in such a way that there's flexibility there.
Joshua Schmidt:And on to our final clip. This comes from episode 65 with Dan Schaefer and Adam Warner, where we talked about Pi-hole. Now, Pi-hole is a great network-level ad blocker. You may have heard of it. But what really struck me about this conversation wasn't just the tool itself. It was what Dan and Adam showed us about how tech communities can come together. This is open source at its best: people who care about security and privacy, building something valuable, sharing it freely, and creating a whole community of like-minded folks who just want to take back a little control over their digital lives. Here's Dan and Adam on what makes the Pi-hole community so special.
Eric Brown:Like we were talking about earlier, this is really enterprise-grade technology at a fraction of the cost. You could bring it in and plug it into a network, and it doesn't consume much in the way of resources at all. And a really big shout-out to the Pi-hole community, because there are people out there curating and generating lists that we all then consume and use. And the lists are up to date; I think some of them are updated daily, if not more frequently, with new and emerging malicious sites. And we can consume that, and then we're just as safe as an enterprise organization.
Joshua Schmidt:Adam, maybe you could speak to the value of the community, how those people have been generating those lists, and how you integrate that information into what you and Dan are working on.
Adam Warner:Every blocklist that's out there is community-maintained. We don't have an opinion; as the software itself, we don't care what you're blocking. You can block as much or as little as you like. It's really up to you how you use it. So on initial install, just to make sure it works and to lower the barrier to entry, we have one suggested list, which we've found works quite well: it doesn't block too much and doesn't break a lot. That's just there to get people started. But yeah, certainly on Reddit there's a guy, WaLLy3K, who maintains a list of lists. So not just his own lists that he puts together; I think he also goes through and kind of optimizes a few other people's lists. Firebog.net, I believe, is where he keeps those. The Firebog. And then there are just so many people out there coming up with different things.
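For anyone who wants to wire one of those community lists into their own setup, the mechanics are simple: Pi-hole keeps its subscribed blocklists in a small SQLite database and recompiles the blocking set on demand. Here is a minimal sketch in Python, assuming Pi-hole v5's gravity.db layout; the list URL is just a well-known example of the kind of list collected on the Firebog:

```python
import sqlite3
import subprocess

GRAVITY_DB = "/etc/pihole/gravity.db"
# Example list URL; pick any list curated at https://firebog.net.
NEW_LIST = "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts"

# Register the list in Pi-hole's adlist table (skipped if already present).
conn = sqlite3.connect(GRAVITY_DB)
conn.execute(
    "INSERT OR IGNORE INTO adlist (address, enabled, comment) VALUES (?, 1, ?)",
    (NEW_LIST, "Added from the Firebog collection"),
)
conn.commit()
conn.close()

# Rebuild gravity so the new list is downloaded and compiled
# into the active blocking database.
subprocess.run(["pihole", "-g"], check=True)
```

Run it on the Pi-hole host itself, with permission to write the database, and the gravity rebuild folds the new domains into the same blocking set the suggested default list uses.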
Joshua Schmidt:One list to rule them all, it sounds like. And there you have it: just a small sample of the incredible conversations we had in 2025. If you enjoyed this look back, make sure you're subscribed so you don't miss the things we have coming in 2026. Because trust me, we're just getting started. You can find us wherever you get your podcasts. Check out our YouTube channel for video episodes and shorts, Spotify for video and audio, and connect with us at itauditlabs.com or on LinkedIn for more cybersecurity insights. Until next time, stay curious and stay secure. This is Joshua Schmidt with The Audit, and we'll catch you in the next one.
Eric Brown:You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and subscribing to this podcast on Apple, Spotify, or wherever you source your security content.