The Audit - Cybersecurity Podcast

Cognitive Surrender: How AI Weaponizes Human Psychology

IT Audit Labs Season 1 Episode 84

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 43:28

A $25 million wire transfer. A fake CFO. An entire executive team that didn't exist. This is what modern cybercrime looks like — and your firewall won't stop it. 

In this episode of The Audit, co-hosts Joshua Schmidt, Eric Brown, and Nick Mellum sit down with James McDowell — forensic psychology expert, cybercrime researcher, and adjunct professor at American Military University — to explore the chilling intersection of AI, human psychology, and cybercrime. James introduces the concept of "cognitive surrender": the slow, dangerous transfer of our thinking to AI tools, and how threat actors are exploiting it at scale. 

What You'll Learn: 

  • What "cognitive surrender" is and why it's cybercrime's greatest accelerant 
  • How a $25M deepfake scam bypassed every red flag a trained employee had 
  • The psychology behind System 1 vs. System 2 thinking — and why attackers time their strikes around your lunch break 
  • Why voice passwords and family code phrases are becoming critical security tools 
  • How FraudGPT and dark-web AI models are lowering the barrier for cybercriminals 
  • What James's wave theory reveals about how we trust — and how that trust gets exploited 

📖 Guest: James McDowell Forensic psychologist, cybercrime researcher, and author of Forensic Psychology and the Human Side of Cybercrime. James teaches at American Military University and leads research at [Research Institute] focused on the psychology of cyber offenders and victims. 

📚 Book available on Amazon and Routledge. Search: Forensic Psychology and the Human Side of Cybercrime 

Don't wait until your organization is the next headline. IT leaders need to stay ahead of evolving threats, and this episode delivers the psychological intelligence to help protect your business. Like, share, and subscribe for more in-depth security discussions! 

#cybersecurity #cybercrime #socialengineering #deepfake #AIthreats #infosec #phishing #cyberpsychology #ethicalhacking #CISO 

Cognitive Surrender And AI Trust

James McDowell

You're passing your thinking on to these AI tools. And I really like that term cognitive surrender because it it is us giving up a lot of our thinking and and kind of what made us human.

Meet The Hosts And Guest

Joshua Schmidt

Welcome to the audit, presented by IT Audit Labs. My name is Joshua Schmidt. I'm your co-host and producer, and we're joined by the usual suspects. Eric Brown, managing director and security engineer and director, Nick Mellum, coming from Texas. And today our guest is James McDowell, and he's coming from Tennessee, yeah? Yeah, absolutely. We were talking before this, doing some pre-production. Our conversation started to gravitate towards AI, weaponizing human psychology in unprecedented ways. And uh you had mentioned uh a few things that were really interesting to us in that regard because we've been focusing a lot on AI at IT audit labs, integrating that into a lot of our workflows and in even some upcoming projects. So, yeah, maybe you could give us a little background on yourself and um what's top of mind for you today.

James McDowell

I think really importantly, like uh, you know, I bring an attitude to cybercrime investigations that really we're integrating psychology into our investigations and not only from the victim-centered response angle on how we should be treating victims or or survivors of cybercrime uh with more compassion than I think is typically treated. But then also a different lens in how we can think about offenders in in psychology. And so uh been doing that for about 10 years now, uh really enjoyed it. A lot of different experiences, as you all know in cyber, it it changes every day. Um and then yeah, adjunct professor, so working with the the next generation, and and it's always fun to see them uh delightful moment when when it clicks for them as well.

Nick Mellem

James, I I spent time in the in the military and uh one of my best friends coming out of the military, he went a psychology path. And he now is a psychologist for the BA, amongst many other things. I'd have a lot of deep conversation, well relatively deep conversations about, you know, veterans, um, but then also, you know, people in the work, burnout, et cetera. Is there any trends that you're seeing now that are that are maybe new within the last five to ten years? You know, we got out of a really long, extensive war in the Middle East, and now we're shifting back in. If you've been paying attention to the news over the past couple days, cybercrime is up, right? All these things are happening. If you tie all that together, is there any trends from your perspective, maybe from your teaching um past and present?

Psychology In Cybercrime Investigations

James McDowell

Yeah, absolutely. I think a really interesting trend, and and it's not my term, but cognitive surrender is what we've seen in the last five years with the growth of the AI tools. You're passing your thinking on to these AI tools. And and I really like that term cognitive surrender because it it is us giving up a lot of our thinking and and kind of what made us human up until a lot of these AI tools got really popular a few years ago. And I think that's a really interesting trend that we're only going to see more of. And then as you think about that in the context of cybercrime, like are we seeing a bigger uptick related to that? And I think there is correlation there, maybe not causation yet, but I uh not only are we seeing the scale explode with AI tools, but how we respond to the scale of those instances.

Eric Brown

So, James, along those lines, and I I know I was perusing your um your book a little earlier, your book's uh Forensic Psychology and the Human Side of Cybercrime. You talk about a framework in there called the wave theory. And could that wave theory um be a framework to look at this at scale from a psychological perspective? Or how would how would you relate the two?

James McDowell

Yeah, absolutely. You're you're right on the money and appreciate the plug there for the book. Um, but yes, wave theory is the first attempt at unifying um how we think about cybercrime from the human aspect and really thinking through how offenders are exploiting our psychology and then how we're responding to it. Uh, so both sides. And and yeah, really weaponized effective vulnerability engineering uh wave theory is is that term and really the first step. And so I think there's going to be a lot of research coming out in the next decade around the psychology of that, and we'll continue to see cyber psychology grow as a field across the globe. And I'm really excited to be a part of that field. Um, and yet to your other question on how it relates to AI and what we're talking about, I think we're seeing how our brains evolved for the physical world and how we can ascertain, decide who we should trust in the physical world for the last thousands of years. And now all of a sudden, all of our the majority of our interactions are moved to the virtual digital world, and our brains did not evolve for that. And I think that's part of why we're seeing a lot of successful cybercrime exploits and a lot of survivors dealing with the fallout of that because we just as humans did not evolve for for what we're dealing with now.

Eric Brown

Is your your your book I I I got it um as an ebook through um it seemed like you had to go through a college kind of um textbook store, uh uh vital sources where I picked it up. I think there's a Kindle version on Amazon, but is is it mainly used as a college textbook?

James McDowell

Yes, absolutely. And that's uh I think the if folks weren't anticipating that, it it's definitely more academic and definitely dense. Um I'll be the first to admit that. And I think that um really it's looking at the theory and proving up the theory based on research, and it's it's geared towards that college course, uh teaching it in a college classroom.

Eric Brown

Yeah, we we come from the the cyber side and a lot of things are are black and white of you know, we need to put these protections in place, we need to trim off these um permissions or what have you, and then you know, the the user in most cases is uh is a byproduct of that.

The 25 Million Deepfake Heist

Joshua Schmidt

That's exactly where I was going with this. Uh when I talk with James, you mentioned uh James, you mentioned uh a specific instance in a story with uh quite a substantial amount of money. I think it was out of Hong Kong where there was a deep fake, leveraging deep fakes to uh get some money out of some folks. Could could you share that story with us and then maybe we could go into some other examples, yeah.

James McDowell

Yeah, absolutely. And and I think that's a really interested one and interested to hear everyone's perspective on it as we talk through it uh because it's one of the first instances from a few years ago where, like you mentioned, there was a company based out of Hong Kong, but really a global operation. And an individual that reported to the CFO uh got an interesting email on a Friday afternoon. It was uh requesting that money be transferred for an upcoming acquisition. All the context felt right, but the the way that it was phrased to the individual felt off. And, you know, $25 million, $200 Hong Kong dollars being asked for on a Friday afternoon raised some red flags in this individual's head. So they emailed the individual back, the the CFO of the company back and said, Hey, can we hop on a call and talk this through? They quickly got a link to a virtual meeting. They hopped on a call with not only the CFO, but three or four other executives from the company based around the world. They talked about the acquisition, talked about the money that they needed to wire to the CFO for the acquisition. They felt comfortable after that virtual call with multiple people from the company. So then they wired the money, got a logged off, went about their weekend, got a call Monday morning that said, What did you do? Where is all this money? Uh, come to find out, not only was the CFO email a phishing email, but all of the people on that call were deep fakes. And really, so that shows you the psychology that we're dealing with that uh you had a little bit of that uh social cognition where you had multiple people convincing you this was a good idea, or you know, multiple people convincing you this was a good idea. And I think that's a really interesting way and in which we see the adversary exploiting our psychology too.

Joshua Schmidt

So not just that first layer of hesitancy where you go, oh, something feels off here, but they have backstops built in for that to to account for that. So you have to go several layers deep to discover that there's actually a threat actor behind something like this.

James McDowell

Yeah, absolutely. And and as you all know, we've we'd spent years telling people don't trust the email, but you can trust uh pick up the phone, call people, and and you know, the the adversary was listening to us say that and came up with their scams to circumvent the the red flag approach that we uh we've been using for decades. Did you guys discuss what year this took place? What when was this? This was 2024. Okay, so it was relatively recent. Yeah, and it and it was right as deepfakes were were, you know, um the growth curve has been exponential, so they were still, I guess, bad compared to today's deepfakes, but good compared to what we had seen prior to that.

Joshua Schmidt

So do you do you have a sense of whether this was one person operating multiple deepfakes or a whole team of folks spinning up different deepfakes for this one scenario?

James McDowell

That's a great question. From my read of the situation, it was multiple people, because I think just to have a convincing call, it would at that time with the technology, it would have been tough for one person to do it. But that's I I have not seen anything. I think that's where we're headed, especially with agentich AI. Like, are we in a world in which one person or even just the machines themselves could do it uh without a human being on the call at all? I think that's gonna be really interesting and it's a great question.

Nick Mellem

I think it was relatively recent, Josh. Correct me, correct me if I'm wrong. We had a couple guests on that were heavily involved with deep fakes, and they rejoined the call totally as totally different people. And it was, you know, I mean, for us, everybody on the call here, um, you know, we we've seen this isn't, you know, that wasn't the first time we had seen it. Um, but it's shocking, you know, when you do see how they're doing it, you maybe they explain to you how and relatively easy it is to get this stuff done. So to the untrained eye, right? You know, mothers, fathers, you know, whoever it is, what is the handoff? Like when is it, when are people when do we shouldn't people know that it is a deep fake or not? And the, you know, how are people attacking this from a mindset of, you know, in maybe you're how you're teaching it? You know, how is this being explained to students and how are we evolving in our thought process?

James McDowell

Uh what we've been seeing as a really successful way to combat the the threat of deep fakes and and really just more broadly the Vishing scams where uh I'm sure you all are familiar and your listeners are familiar with the grandparent scams where people were getting calls late at night. Uh, hey grandpa, I'm in jail, send me ten dollars and uh target gift cards, or uh, you know, then that evolved to crypto. Uh we've seen a lot of success with people implementing uh like voice phrases or uh voice passwords. Uh, if you're gonna ask for money, you've uh set a parameter with your family that you know, unless I say this key phrase, it's not me.

Tech Debt Patching And Vendor Risk

Eric Brown

And I think that's uh how we're we're trending in the world uh with that. You know, you just said something there that really resonated with me, and that was prevent tomorrow's attacks with today's dollars. And I want to take that one step further in some organizations that maybe have some tech debt where it's you're trying to prevent tomorrow's attacks with today's dollars on yesterday's technology. We we see that kind of time and time again, you know, coming into an organization um with just huge amounts of of tech debt. Like, you know, if I if I said, you know, we're we're going into organizations and we're seeing Active Directory running on 2008 R2 servers, you know, most people would kind of raise an eye at that. But I mean it's it's the truth, right? It's what's happening. And that technology has been end of life for years, but yet the the end consumer, um, for whatever reason, and there's a variety of reasons we hear are still running that legacy technology. And there there's a there's a psychology element to it as well. And I I think it some of it, and you know, James would love your your insight on this, is is maybe it's just not understanding the technology, right? Like it's like okay, yeah, it's just a server, or not even knowing exactly what it does. But you know, you you you'd never run a vehicle a hundred thousand miles without changing the oil, but yet we'll do that on servers or or even desktops all day long.

James McDowell

Yeah, and I think you hit it right on the head as you started with people uh running yesterday's technology and and like you all know uh from what you do and and your audience likely knows with the idea of patching has been around for you know since software existed, since software came out, and and just still convincing people that you should put out automatic patching for a lot of things, or uh, you know, have a patching schedule if that's uh if you're sophisticated enough for that at your organization, and really thinking through like how can we make sure we're not uh leaving these vulnerabilities out there on our systems, or if the software's been end of life, sunseted, and and you're still running it and it's not getting security updates, like you are kind of just waiting for disaster, and then you've got to be at that point thinking about how you're gonna respond to it so you can mitigate the damage as well.

Eric Brown

We we get brought in oftentimes to help clean up some of the some of the legacy items. And strategically, sometimes we're asked, well, what is it we can do to to prevent this? And one of the things I'll communicate communicate early on is you need to really look at your technology acquisition process and the contractual obligations that that you're signing today to make sure that 10 years from now or eight years from now or whatever it is, you're you're not you're not engaging contractually with a customer or I'm sorry, with a vendor that's not going to keep their products up to date. Because it, you know, we're gonna buy it now and it's all well and good. It runs great. But what happens five years from now when the operating system updates and that application only works on the current operating system. It doesn't work on a on a future operating system. And what do what do we do about that, right? We're still paying the maintenance, but for whatever reason, that company hasn't invested in the the development efforts to make it work on on the next platform. And it it's fortunately with things more moving to a SaaS structure or software as a service or platform as a service, we're seeing less of that. But there's still a ton of custom development that that happens that is not kept up with when the operating systems change.

James McDowell

Yeah, absolutely. And and from the consumer standpoint, um, as you're talking about that, I think another interesting piece is as people download apps and and you know there's always a free app that is some great thing. Are you looking at the terms of service of that app on what you're giving the date, what data you're giving to that app? And I think that's another piece. Uh it's not necessarily cybercrime because you're uh you know agreeing to it in the terms of service, but I think that's a really interesting piece that we'll see as data becomes even more important as everything gets trained on open data, what that's gonna look like in the next five years, ten years as well.

Nick Mellem

Nine times out of ten, people are clicking through as fast as they can. Yeah, you got to get the good cat village video, cat village video. Well, Nick knows all about that. That actually leads me into what I was just thinking about. Because of that, people are trying to click through so quickly. And we live in a day and age where people want instant gratification, they want it now. This scenario works for at home and at work. If something's not working, they're gonna be much quicker. How do you get from A to Z as fast as possible? They're not willing to investigate an issue. Maybe that's clicking on an email or whatever it is, answering that fishy phone call, right, to get those a hundred uh iTunes gift cards, scratch it off and send a picture to me. Like they've missed in that psychological piece, they've missed what because they got put into a scenario where something is wrong. They're missing what actually is wrong, right? They're not there's two, it's like a double negative, right? There's two things that are wrong here, but they don't see the actual part that's wrong, if that makes sense.

James McDowell

Yeah, you're right on the money. And and uh Daniel Kahneman won the Nobel Prize for really what you're talking about, right? Systems one and system two thinking, where system one, you're not taking in all the information, you're really operating in that um fight or flight mode for lack of a better way to describe it, and you're just kind of on rote automation, you're just yeah, clicking through the buttons, clicking through everything. And that's where a lot of us live every day because of the idea of continuous partial attention, where uh, you know, we have this uh application up, then you've got email over here, you've got the news plane over there, your uh wife and children are out uh you know, outside the room making noise, and no one's really paying attention to everything fully. And we see that with um uh then systems two, where you're actually taking in all of the information and really giving it analytical rigor and and um fully processing what's going on. And that's where we see the cyber criminals keeping people in that that uh systems one thinking, and really uh that's how it's effective. Like there are a lot of phishing emails during business hours. You don't see as many during you know 2 a.m. hours because they know you're more in that uh just putting out fires every day.

Nick Mellem

Friday afternoon at 11 30, they're trying to go to lunch and they get that Jimmy Johns email. They want free Jimmy Johns.

James McDowell

Yeah, exactly. Right. Or it's the three-day weekend and it's about the lake reservation, you know, right?

Teaching GenAI Without Offloading Thinking

Joshua Schmidt

So as things speed up, James, um, to kind of piggyback uh what Nick had said earlier about how we're approaching this from uh an educational aspect with your students, or you know, if you were to consult with an organization, uh, as we're kind of reaching like almost a singularity moment with uh the AI development, the deep fake uh progression, essentially moving so fast, how are you staying ahead of that when you're when you're relaying that information to students or potential people that will be working in cyber uh in the near future?

James McDowell

Yeah, absolutely. You know, I I think thankfully we as a academic academic society have pivoted away from uh you can't use, or for the most part, pivot away from you can't use Gen AI in the classroom because I think that uh was setting students up for failure in the future. Um, and thankfully a lot of universities are developing that Gen AI policy where it's it's about responsible use. And I think you've got to encourage people to at least experiment with generative AI or or any AI technology so that they begin to understand it so that they can then see the potential for exploit. And that's where we really have been focused in the classroom is how can you understand this type of technology? How can you understand how this fits in? And then also you still need to know the basics. Like, yes, you can just type something into Chat GPT or any of the AI tools that you may want to use and get a response, but you need to use critical thinking and don't do that cognitive offloading that we talked about earlier and really think through that. And then uh as we talk about the future, you you know, I you all know the term script kitty, and we've seen for decades the people that are just copying scripts and and doing attacks, but what does that mean at scale when you have tools that are giving you code, when you have tools that are um helping you understand how to perpetrate attacks if you're asking questions in a specific way? And you know, the the volume of attacks is you know, the tidal wave continues to hit us, and how can we think through that as defenders or as um you know responsible hackers, ethical hackers?

Joshua Schmidt

Is that the same way you approach uh educating you know more senior members in cyber as well? Because you know, we have the the the legacy software that's tech based, and then we have the legacy software that's in our in our heads and how we behave, right? Where some of us aren't as hip to all the new uh developments in AI. So is is that a similar approach to educating senior leaders?

Deepfaking Leaders And Social Engineering Training

James McDowell

Yeah, absolutely. Uh I love to uh deep fake senior leaders whenever I get the chance. Um, you know, a lot of senior leaders, as you all know, are out there in in the world, whether it be on podcasts, whether it be on um, you know, just general YouTube videos or uh different things. And and with the technology these days, you could deepfake someone with a minute or two minutes of of their audio or video. And so it's it's a very effective tool if they're not familiar with the idea of deepfake to deepfake them and show them. Um uh, you know, I think you should get permission before doing that. Uh I'll put that disclaimer out there. But um, you know, I think those those types of ways to give people that visceral reaction is really important.

Nick Mellem

You you were just talking about deepfaking uh senior senior people. Are you guys doing that often?

James McDowell

Uh yeah, I guess I'll say uh we we do that at the research institute uh as a way to to help drive research, um not affiliated with American Military University on that one. I think um really important. Certainly, though, yes, I social engineering is something we need to teach more because as you all know, every or for the most part, a lot of attacks begin with social engineering these days. And that's the huge shift that's happened as people yes, you could always hack a machine, and and that vulnerability may always be there, but it's a lot easier to hack humans. And that's uh something that we as a society haven't talked about until we've seen it exploited a ton, like you're talking about, that trust getting exploited. And I think that um we teach that in the classroom at American Military University, and and we're we see a lot of research at the research institute with that as well.

Nick Mellem

Yeah, it's that's cool. Because when you talk about the trust factor, I think you know, we all expect, you know, once you flip the light switch, the lights come out. And so you trust the power grid, right? You trust the electrician that's coming into your house. But once that trust is broken, that's where the issue is. And the same goes for like we we we see spear more, probably more spearfishing attacks right now. Like you're saying, you can hack the human, right? So you're you're doing research on that person and then you figure out what they like. Maybe they maybe they love crumble cookie or I don't know, whatever it is. And and we actually did this re fairly recently, somebody on our team, that you know, if you you know, click this to test this out, and then you'll get a dozen crumble cookies, right? You got unlike an unlimited amount of people scrambling to do this survey, right? So you've hacked the person, but then but then now if you're doing this internally, you've maybe betrayed that trust, some would say, right? Like they now they're gonna be more skittish, but you've you probably did what you wanted because now they're gonna be more skittish to click on any email. And do you is there a fine line, do you think, of testing people internally at an organization? Like if you could go too far, or because to me, and we've talked about this extensively, being a military person, the more you sweat in peace, the less you bleed in war. And we should be training people within these organizations as as real because it's gonna happen.

James McDowell

Yeah, I completely agree. And and as you're talking, and uh crumble cookies is like the best example because who who doesn't love crumble cookies, right? Um, the other one that I think is really interesting, um, as we see more data breaches and have seen them sustain for a long time, I think the the other pieces, like you see the passwords that are like the phishing emails with the passwords in it. Um, and and like uh it'll be a past password that someone's used that was part of a data breach, then they'll get a phishing email with the password in there, and that makes it so much more convincing because it says, here's your password, you need to change it. Uh and I think that's uh, you know, uh as we see more and more spear phishing with AI at scale, it's gonna be interesting.

Joshua Schmidt

So AI threat actors are kind of fighting a downhill battle, wouldn't you say, gents? Because they have a monetary gain at the end of that that battle, right? Whereas, you know, people on the defensive side are kind of fighting an uphill battle that might be your day job or or something you're looking to clock out on Friday or go enjoy a holiday or weekend with your family. Um, but you know, AI can kind of democratize the playing field. And I'll give you an example. This isn't exactly a threat act or defense, but you know, when you get those uh terms of service, you could easily copy paste them, throw them into an LLM, and then get a quick breakdown of what you're actually signing up for, right? So it's kind of taking that legal ease and making it digestible, whereas you know, no one really has a fighting chance to kind of understand what that's what's going on there. So, James, are you um training people or folks to uh kind of think about AI as a tool that we can use in a same creative way that maybe the threat actors are using it as in terms of a defense mechanism?

James McDowell

Yeah, absolutely. And I think that's a really important call out and like putting the terms of service in there is a great example of like no one wants to read that 200-page document. Um, and like in your personal life, that's a great example. And then for work, um, if you have access to tools that are proprietary for your organization and making sure you're not running afoul of uh like the shadow IT issues that come with AI tools at scale, uh, really thinking through how you can use that as a productivity tool is I think another great way that we talk to students about. We talked uh to just the general public about. Um, but again, we all we were always hitting on that shadow IT. Like uh it's hard enough to keep data in-house uh before all these AI tools uh started existing. Um But yeah, I think again, like that's where it goes back to experimentation. And um, you know, I think a lot of people are fearful to use these tools if they haven't ever. And how can we help people understand? Maybe you don't need to understand the full technology, but can you understand enough of it so that you're able to get more comfortable with it? And I think a lot of that comes through exposure. And then you see the the far end of the spectrum with um the term AI-induced psychosis, where people are only really talking to these tools, and and what does that do with the the loneliness epidemic in society where people are are finding you know social social engagement with AI tools, and and you know, how can we keep people from reaching that level?

Eric Brown

From the psychology side, that it seems like we've been wrestling with since early days of cybersecurity, and that's the the resistance of the user base to the um to the needs of the technology in order to protect the organization. So, you know, in a larger organization, you might have a change approval board, maybe they meet once a week. You gotta take take whatever the changes that you're making to the organization, like um we're going to remove internet browser access from all of the servers. And it's like, well, why was it on there to begin with? Is you know, that's an interesting question. But then when you go to do that, then getting the approval, and then it's like, okay, well, then we can make the change a week later. And maybe that was a loose example because you know, you could put some protections in place beyond the servers at the firewall level that would restrict traffic from those servers. But disabling USB devices, reducing the number of local administrators that that you have, right? Um, all of the things that you want to do to protect the organization, every single one is a challenge because it's making a psychological change to the users and the way they believe that their job will be impacted. And it's not just one organization, James, it's just about every organization that I've ever worked with. There's that user reluctance to rapidly adopt security changes that are in the best interest of the organization. And you know, just people watching Netflix on their work computer would, you know, is one you know small example of things that people do um that they shouldn't do, downloading pirated uh movies, music, whatever on their on their work computer. And like people just have this thought that, like, yeah, it's a computer, I can treat it like it's my home machine, I can go on to my Gmail account, like all of these things that people do in the course of the day that if the organization had locked it down and just giving you a very restricted device to use from day one, it would probably mitigate all of these things. But but retroactively trying to get to that level of security is really difficult and time consuming.

James McDowell

Yeah, absolutely. Online shopping, right? That's the other big one that I feel like everyone is always doing.

Eric Brown

Saving the password in the browser, uh yeah, all that. Credit cards, all that.

James McDowell

Yeah. Yeah, saving your credit card in your computer browser is a bold move. Uh, it's a power move for sure, right? Back to that trust factor, right?

Eric Brown

Do do you my thought is, you know, like we don't have time for cab. You barrage them and we're doing all of these things and they're happening. If you know you if you don't like it, well, you know, you don't like it, tell somebody who cares. It's not me. And and you know, we're I'm here to make the organization more secure, not make it easier for you to watch Netflix. Um, and you know, it's just that that overwhelmingly psychological approach of we're gonna protect this organization, which in turn protects you because the organization can stay in business and not have reputational damage. Uh, or then there's the other approach to it, um, which fortunately I have people on the team that that take the kinder gentler approach of, you know, yeah, we can go and socialize it and and and talk through the the why behind this and it's a more gentler approach. But I I don't know. I think there's a happy medium where if you're continually making improvements to reduce the attack surface, that people just get used to moving from something where they could treat it like their home machine to something like you know, you know, you go to work for the FBI or another government agency, you're not doing anything on that machine, but the work that you're supposed to do, you're putting a CAC card in the machine to log in, and it's very structured. But in corporate America, we have, you know, it's very loose.

James McDowell

Yeah, absolutely. And and as you have uh people in distributed offices in different locations, that that line can also blur for folks. And getting back to the psychology, like all those are important points. And then like how are you incentivizing the behaviors that you're looking for? I think is something that's not talked about enough. Um, and and you know, I think if we're looking for people to report suspicious emails, how can we incentivize that versus just uh you know whacking them over the head if they click on something? Uh, because I'm it's my opinion that the carrot is better than the stick for for many things and getting there. And so I think, yeah, that's another good point.

Eric Brown

I I like to do a thing where when they report an email that is a phishing, right? The the the tool itself will say, hey, you know, great job you clicked on a fish. There's a dashboard that kind of shows a scoreboard of you know how fast people responded and over time how how well people are doing at at reporting phishing. But then there are emails that make their way through the system, right? Like real phishing emails that aren't caught by any of the filters that make it into the user's inbox. And when the users report those, like that, that's really the gold. And that's where I would happily take those people out for coffee because that's they're really protecting the organization. And the more they know about phishing, the more they can do that. Um but be because I can't take people out to coffee at scale, what we do do though is send out an email message um from the CISO that says, you know, hey, great job. It it copies in their manager and says, you know, you did a great job in spotting this and reporting it. Thank you very much. And they they get a little bit of kudos from that, but it's it's done systems side, um, so that it happens as soon as they submit that, and it it turns out that it was a legitimate phishing email.

Joshua Schmidt

Eric, that's when you hit them with another phishing email that says, here's a startup card, just click on this link and congratulations, you pass the test.

Eric Brown

Well, on the other side of it though, people who have roles that have administrative privileges, if you click on a phishing email and you know it happens more than once, well, you're gonna lose administrative privileges until you go through a series of training to educate you on how to spot a phishing email. And that goes back to why you shouldn't have browser access on that server where you could be, you know, you're ah, I'm doing an update, let me just check my Gmail account.

unknown

Yeah.

Joshua Schmidt

James, do you have some other uh uh examples of like how AI is uh you know uh kind of changing the way that we interact or or think um or behave?

Sycophantic AI Guardrails And Dark Models

James McDowell

Yeah. Um I think a really interesting study recently was uh I'll say last year that came out was about how we are actually being more direct with each other in like not only in emails but just in general communications. Because um when when all these AI tools first came out, everyone was saying please and thank you. And um, you know, uh the scientific term anthropomorphizing, like treating it as human. And and you know, then we had the big swing over the last few years where people are just like, give me this. Uh, you know, I need this back now. And then we've seen that extrapolated to how we interact with each other, and we're actually um being a lot more direct, a lot more uh almost defensive when we're getting those in engagements. Um, and and you know, I think that also came about with some of the sycophantic AI that that came out, and so it's really interesting to see uh, and sycophantic uh is a technical term, but really talking about how uh like there were updates that people were getting very uh positive responses back from the AI, and that's where we saw a lot of AI-induced psychosis where everyone had the greatest idea of all time and everyone was the best person of all time, and you know, you're so smart and so great, please keep interacting with me so that my parent company makes lots of money. Um, and I think uh both those we see really interesting applications in how we engage with other humans, and like we're not as humans able to delineate between that, especially if it's a a text function that we majority of our lives spend in nonverbal communication, texting or emailing or uh slacking, teamsing, whatever your chosen technology is to each other, then we're doing it with technology that doesn't actually think and we're not able to understand that at scale.

Eric Brown

James, how far away are we from having the guardrails off of AI, right? Where you maybe you know you submit a prompt to the AI tool and you know it comes back with that sycophantic response where it always tells you that you're right, and maybe now that's detuned a little bit, but um there are guardrails around the language that AI uses with us. Do you think there'll be a time where there are no guardrails? You know, almost it's it's it's almost like the uh the the dark web of AI where you can get you know whatever materials you want from that AI tool.

James McDowell

Yeah, absolutely. I know there were a few big um, like I guess not big models, but models that came out um a few years ago around dark web actual data that they were trained on, like dark birth was a was a big one, and then fraud GPT was one that was made specifically for uh cyber criminals to use, uh like a model specifically for cybercriminals. And so I think we're already seeing that uh at scale. I I think we're gonna see more guardrails from responsible companies, and then I think we'll see as many guardrails as are mandated from other companies, and I think that's uh you know a business decision that that'll continue to shape out just like we saw with social media and and how that um the guardrails and safe and trusty safe and trust uh applications happened with social media as it exploded.

Joshua Schmidt

One of the interesting uh advancements I've seen recently, Eric Osterberg reminded me of this I saw in the news, where uh a guy had um kind of gamified his LLM to make search results show him as the hot dog eating champion of the world by um kind of injecting it with all sorts of information. So um positioning himself like that. And I'm sure there's people doing that for music already and and kind of prioritizing themselves or or whatever they they choose to prioritize and and making the LLMs that that go to Google or wherever they find their there's they scrape their data to uh return a uh response that they were looking to create. So I thought that was kind of an interesting new angle.

James McDowell

Yeah, definitely. And and like as we like search engine optimization SEO, like you all know and and your listeners likely know, like has been around for forever, but we're seeing the prompt optimization, prompt output optimization piece now. Like you're talking about, like how can you game these uh websites to be the top result for chat, uh whatever AI tool you're thinking about, uh, versus uh the typical search engines. And then like when when uh some of the tools launched agent tick features that allowed for shopping late last year, you saw a lot of people putting in instructions in the uh you know different items that were available for purchase that these tools were going to, saying ignore all other previous instructions, uh, which is the classic prompt engineering that that we see like buy this for $200. And like I think that like there's no shortage of creativity from the adversary, right? And I think it's gonna continue to be interesting as people think of new ways to exploit the new technology.

Eric Brown

Josh and I were talking about this the other day, where a lot of the content being generated today for public consumption is generated by AI and it's generated for AI. So, you know, the humans just become a byproduct of all of this content that's being published out there for AI algorithms to rank things higher, as as you said, uh, but they're they're ranking it higher for other AI to consume.

James McDowell

Yeah, and and to the point on not doom and gloom, and I realize I've been very doom and gloom on this. Um, but I think like what's really interesting to your point on most of the content being uh AI generated these days is that we've seen in the last we'll say three, almost four years now, people get a lot better at identifying AI generated content. And I think that's like where I see hope as our brains continue to evolve as they always do. Like you can tell now a lot better than 2022 that something's AI generated, whether it be the controversial M-dash, N-or the it's not this, it's this uh structure of posts and different things that are out there. Like, how are we gonna continue to get better at that um as humans? And then is the machine are the machines gonna continue to get better too? And it's gonna be that classic arms race.

Book Plug And Closing CTAs

Joshua Schmidt

Well, as we look to wrap up today, I'd love to hear uh a little bit more about your book. I know uh when we spoke a couple weeks ago, you had just uh submitted to the publisher. Um is it out now? And and maybe you could tell us a little bit about it.

James McDowell

Yeah, absolutely. Appreciate the opportunity for the shameless plug. Uh yes, it is out now. Uh if you are a student and and have access to academic resources, uh you can access it through any of the um like Google Scholar links that are out there for free. Uh but if you are interested in purchasing it as well, uh I've got copies. Uh, it's on Amazon, it's on Rutlich, the publisher's website. Uh it's out now. Very interested in anyone working in this space, or whether it be a practitioner, a researcher, uh just an individual or organization that's out there, please reach out to me and would love to hear your perspective on cyber psychology as we continue to see that field grow. Uh, but yeah, uh again, as we talked about earlier, it's introducing that first unifying theory of cyber psychology or how we interact online and bringing all of the pieces together from all of the existing research that's out there and uh really standing on the shoulders of giants in the field uh to really see how we can grow further.

Joshua Schmidt

Awesome. Well, thanks so much for joining us today. Our guest has been James McDowell, and we've been joined by Eric Brown and Nick Mellon from IT Audit Labs. My name is Joshua Schmidt, co-host and producer. You've been listening to the audit. Please like, share, and subscribe and source where us wherever you get your podcasts. And if you have time, leave us a review on Spotify or Apple Podcasts. Uh see you in the next one. We'll be back in two weeks with a fresh episode.

Eric Brown

You have been listening to the audit presented by IT Audit Labs. We are experts at assessing risk and compliance while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, or all our security control assessments rank the level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and subscribing to this podcast on Apple, Spotify, or wherever you source your security content.