The Audit - Presented by IT Audit Labs
Brought to you by IT Audit Labs. Trusted cyber security experts and their guests discuss common security threats, threat actor techniques and other industry topics. IT Audit Labs provides organizations with the leverage of a network of partners and specialists suited for your needs.
We are experts at assessing security risk and compliance, while providing administrative and technical controls to improve our clients’ data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of the organization.
Big Data Meets Cybersecurity: Expert Insights from Wade Baker
Dive into the transformative power of data in cybersecurity in this must-watch episode with Wade Baker, where cutting-edge insights meet real-world applications.
Hear from The Audit Team as we discover how massive data sets are reshaping risk management, AI’s evolving role in combating cyber threats, and the surprising insights data can unveil about security incidents. We also dive into ransomware trends, phishing techniques, the ethics of AI, and the critical role of storytelling in decision-making, with some fun nods to fantasy swords along the way.
In this episode, we discuss:
- Using big data to tackle cybersecurity challenges
- Ransomware and phishing trends
- The ethical debate around AI in security
- Unique discoveries from security data analysis
- Practical strategies for influencing decision-makers
Catch this insightful conversation and stay ahead of the cybersecurity curve. Like, share, and subscribe for more expert discussions on the latest security trends!
#Cybersecurity #DataAnalytics #RiskManagement
Speaker 1:All right, you are listening to The Audit, presented by IT Audit Labs. I'm your co-host and producer, Joshua Schmidt. We have Eric Brown and Nick Mellem, and today we're joined by our guest, Wade Baker from the Cyentia Institute. Wade is an alumnus of Virginia Tech, as you can see by his sweater there, and we brought him on today to talk about his company. Hopefully we can go into large data sets, maybe touch on how AI affects that, and get to know a little bit about Wade. So, without further ado, Wade, could you give us a little background on yourself and how you came to be working with Cyentia?
Speaker 2:Yeah, sure, thanks for having me, by the way. Always enjoy doing these kinds of things. So yeah, a little bit about me: I founded the Cyentia Institute about seven years ago. I am an alumnus of Virginia Tech, by the way, but I'm also currently a professor at Virginia Tech.
Speaker 2:So I teach some cybersecurity management classes in their business school. So one leg in academia and the other leg in the security industry, and that kind of captures what I like to do. I like learning things, I like researching, trying to figure out answers to hard questions, and security has a ton of hard questions that really haven't been answered yet, and so that keeps me interested. And I like answering questions with data rather than just taking a guess, finger-in-the-wind kind of thing. So I really have spent a lot of my career chasing different security data sets, seeing what I can learn from them and sharing what we learn with the community, and that's what we get to do at the Cyentia Institute. We get access to a ton of interesting data, and our job is mining it, drawing out insights and sharing that in various forms of content.
Speaker 2:So it's a good gig. It's sort of one of those where I said: what do I want to do with my life? I want to do this. I wonder if I can create a business out of it. Yay, I can. And here we are, seven years later. So I feel good.
Speaker 1:Excellent. Well, thanks for joining us today. One thing you'll get to know about Eric and IT Audit Labs: we have a game night once a month on Wednesday, so big-time gamers. He's really gotten me into gaming, or back into gaming. So I have an icebreaker question for us today, to kind of get the conversation flowing here, and that is: what is your go-to D&D fantasy character? I tend to be a barbarian, whether it's Diablo or, you know, I have HeroQuest here as a board game. We have yet to play it at game night, but I kind of just go for the high strength, the high fortitude, and just bash my way through the game. So that's kind of my go-to. How about you, Eric?
Speaker 4:For me, I'm probably more on the magic side, so sorcerer, wizard type. I kind of like the intricacies and nuances of spellcasting, and debuffs or buffs, things like that, and just a little bit more of the battlefield-control type of characters. But you can do that with healer types too. But Wade, a question for you, and we've got to hear from you and Nick too on your characters: I see you've got some swords there behind you. It looks like a bastard sword and some longswords or something. What's the story behind those?
Speaker 2:The story is, I just like fantasy literature and I've had these since I was in high school, and I finally get a chance in my adult life to display them, and I'm thrilled about it.
Speaker 1:So wait, I've got to interject and say I'm just finishing up The Once and Future King by T.H. White. It's been one of my favorite books I've ever read. Are you familiar with that?
Speaker 2:That one I am not. But I'm jotting that down.
Speaker 1:It's the quintessential King Arthur story from the thirties, and it's a joy to read. So, okay, while he's jotting that down, maybe we can get an answer from Nick. What's your go-to?
Speaker 3:Well, I had to rack my brain to think of something, and since we've been talking I changed my answer. Pre-episode, I was going to go with Lord of the Rings, and I'm not much of a fantasy guy. I appreciate it; it's not quite my thing. But I like Lord of the Rings. The second one, I think it was, I can't remember what it's called. Anyways, there was a scene.
Speaker 2:Can we kick it off?
Speaker 4:I can exit.
Speaker 3:I might redeem myself, because there was a part when the whole battle is going on. What is it, three quarters of the movie is the battle? Anyways, Legolas. The Two Towers, thank you. Legolas, right, is his name. He's got a bow. He looked down, him and the dwarf, and they were counting out the kills they were getting, like 17 or whatever, and they were going on. I'm going with Legolas. He's nimble, and you can't beat a bow, right? So that's how I've got to go.
Speaker 2:I like it.
Speaker 2:It's a good choice. Thank you. I usually end up going with some kind of ranger character. You know, I always say no, I'm going to do something different this time, but I always land there, because I do like the bow, but I also like some combat, and they usually have a few spells in their pocket as well, or something like that. I thought we were going for actual characters, but one of the early fantasy books I read was the Crystal Shard series, and Drizzt, however you pronounce it depending on who you ask, is a Dark Elf Ranger character in the D&D world. Love that character. So that would be my answer.
Speaker 4:Have you read The Lies of Locke Lamora yet? Have you read any of the books?
Speaker 2:No no.
Speaker 1:I'm 0 for 2 on questions. We're learning things today. I didn't expect you to actually ask.
Speaker 2:I just want to say you know we're not all about the data today. I guess the book I have now that I'm cracking open is Brandon Sanderson's latest in the Stormlight Archive. I don't know if you do any Brandon Sanderson, but that just came out in the last I don't know week or so, so I'm digging into that.
Speaker 1:Cool. Yeah, there's a large library attached to IT Audit Labs. I still have to go dig through that. Eric's kind of a prolific reader, so you're in a good spot.
Speaker 4:I had another question. Go ahead, Nick.
Speaker 3:What's that? I was gonna say, I thought Josh was leaning into something when he said he let the cat... oh boy.
Speaker 4:Uh, nick's got a lot of hairless cats and we always kind of joke about them. But uh, with, with your um, it looks like you've got an eclectic collection there behind you, certainly, um, the original star wars, which is awesome to see, looks like. Is that, uh, one of the transformers that optimus prime back there?
Speaker 2:So that is Optimus Prime, but it's a Lego Optimus, and it actually transforms, which to me... same thing with the Voltron over there. It's a Lego Voltron.
Speaker 4:Oh, that's cool.
Speaker 2:Yeah, it makes it even harder to make.
Speaker 1:So we can all align on our love for fantasy.
Speaker 3:We'll give you a pass. We'll give you a pass. I'll see you guys later.
Speaker 1:You'll get there Chat, books and stuff as you approach middle age. You'll get there, nick. So yeah. One middle age. You'll get there, nick. So yeah, one of the things I wanted to start out with shifting gears here, wade, is you must have some security background if you're dealing with data analysis around security. So how does your security experience, or what does your security experience, and how does that inform the data analysis?
Speaker 2:I was actually a security person before I ever got into data analysis, but they go very much hand in hand. So I was doing a lot of security and risk assessments. This would have been 20 years ago or more, and that went okay for a time. But it got to a point where people started asking questions like: how often does this attack happen, and is this one more risky than this thing over here that I'm worried about? And I didn't really have good answers, and I didn't like the fact that I didn't have good answers. My response to that was: oh, maybe the answer is out there somewhere, and I'll start collecting some data.
Speaker 2:And then I became kind of a data analyst. I still don't consider myself a hardcore data scientist. I'm a little bit more of a data storyteller, but I love the whole process and those two feed off each other, I feel like. Because I have security experience, I know what to look for in the data and the types of data that I need to answer the questions that I want to answer, and so that keeps me going, whereas if I didn't, I don't think I would be nearly as good at what I do if I was doing it in a completely separate field. I have no interest in being a data analyst in something that's not security, at least not at this point in my career.
Speaker 4:How did you start digging in on the data side?
Speaker 2:this point in my career. How did you start digging in on the data side? It was probably so. I started a PhD in 2003. And I realized that I needed some data to do my dissertation, and I wanted to do it on some kind of security topic and I landed on risk management and how do we take a more quantitative approach to risk management and decision-making. And that's when I started collecting any shred of data that I could collect. So it's been, yeah, about 20, a little over 20 years ago for that.
Speaker 4:So 2003, that was really when a lot of information was probably mostly in libraries and starting to come online, right? And now it's like, if you're going to the library, the data is too old. So how have you seen that change over time for you in your practice, with almost real-time access to data?
Speaker 2:Yeah, I definitely, over that 20-year period, have seen a shift. When I first started, there was not sufficient data available; it just didn't exist, or people weren't making it available or publishing it. Now I think there's plenty of data.
Speaker 2:It's more a matter of how do we make sense of all of this and use it for a specific purpose. And yeah, especially in the security industry, it's kind of a cagey space. People don't just readily publish data, because we're supposed to protect data; we don't just put it out there for the world to see. And especially data on risk factors: now you're talking about when incidents occur, the frequency of those incidents, what controls were in place or not in place, and the losses associated with them, which people never voluntarily report. And you started to see, I think 2005 was the California data breach ruling, that required that if there was a data breach on a California resident, it had to be publicly reported. And then the data started kind of spilling out and getting larger and larger as more and more states did that, making those kinds of data available. So lots of things are contributing to the current state of data availability.
Speaker 4:On security factors: I was working with a client recently, and they were evaluating a product that would help with AP automation. So essentially, invoices come in, the product would look at those invoices and then assign some automation and workflows to the right approver, and you could essentially train the model: you get these invoices, you match them to a PO, or if there's not a PO, what's the company, and then who it should go to in the organization for approval. And the product they were looking at, not the client, was touting its ability to recognize fraudulent invoices. And I've seen a couple, and they look pretty good, where they just send invoices into the accounts payable department, and the department doesn't really know that they're not supposed to pay this Oracle invoice, or if they are, you know, it just looks funny.
Speaker 4:But they were saying that their product was able to detect fraudulent invoices, and I said: well, does the product then use data that it's gathering from the hundreds or thousands of other implementations that it's in? And if it sees a bad Oracle invoice in this company, does it then allow some sort of notification to happen in other companies, or at least allow that option, so you could kind of take advantage of this? And they said no, it's really focused on the instance of the particular install for that particular client, and it doesn't share knowledge. I was like, wow, that's kind of a gap, right? You would think that if it is detecting it, it could share that. And we see that with some of the more modern phishing tools: if they're starting to see some bad behavior in one client, they let all of their clients know. And I think, where we're at now with the speed of information, if the product isn't smart across its install base, it's really a disservice to the customers that use it.
Speaker 2:Yeah, agreed, and a lot of those things can be shared without putting the target organization at risk. If you get a phishing email and it has a link or URL in it, well, the next email that is sent that contains that link, regardless of the subject line, regardless of who it was sent from, could be classified as suspicious: don't go to that link. So there's a ton of that information that can be gleaned and shared to bolster defenses, and it's usually a very good idea.
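[Editor's note: the indicator-sharing idea Wade describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation; the indicator set, URL, and email bodies are invented for the example.]

```python
import re

# Shared indicator set: URLs seen in confirmed phishing emails elsewhere.
# In practice this would be fed by a threat-intel feed, not hard-coded.
KNOWN_BAD_URLS = {
    "http://example-payroll-update.test/login",
}

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def classify_email(body: str) -> str:
    """Mark a message suspicious if it contains any known-bad URL,
    regardless of subject line or sender."""
    for url in URL_PATTERN.findall(body):
        if url.rstrip(".,)") in KNOWN_BAD_URLS:
            return "suspicious"
    return "ok"
```

The point of the sketch is that only the URL itself is shared, so no detail about the original target organization needs to leave its walls.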
Speaker 3:It's never ending, Wade. I was on your website before we started the show and I was kind of more curious on the IRIS.
Speaker 2:Oh, IRIS yeah.
Speaker 3:Yeah, are you able to speak on what that is or how you guys use it? I saw some eBooks on there. It looked like maybe, if you go on, yeah.
Speaker 2:Yeah. So the information risk insight study I is a series of research that we've been doing for the last four years I think we started in 2000. And it's an effort to demonstrate that you really can collect data on cyber risk and analyze it and share that in a meaningful way. There's long been debate of can you really quantify risk factors in a way that is reliable to make risk-based decisions in security. A lot of people say, oh no, everything changes too quickly and all of this and there's no way to measure it with any degree of confidence. So we might as well just stick with high, medium and low red, yellow, green kinds of things. And so, iris, for us is a very intentional attempt to say no, I mean, look, we can't actually measure these things. Look, we're doing it here. Let's show you how we can come up with probability estimates that are well-founded in historical data and draw a distribution around financial losses of what a typical loss or an extreme loss is and all of these kinds of things and kind of push the industry forward. And you know we do that in kind of a rolling series of reports.
Speaker 2:The latest one we did focused on ransomware, just because that's such a big topic for many organizations, and cyber insurers are losing a lot of money to ransomware, so we wanted to turn that analysis onto it. But yeah, these are free reports. Anybody can get them; there's no registration. Most of them are sponsored by organizations, and the last few have been sponsored by the US Cybersecurity and Infrastructure Security Agency, or CISA.
Speaker 4:I was surprised, right, because I've always firsthand experience around phishing and seeing phishing and just reading about phishing being that that top threat vector. But then I I saw that the report called out that the public-facing applications were also very high and that made sense with the waterhole attacks or attacks like SochGhoulish, where sites are infected with this malicious JavaScript and then lots of people are connecting to them and getting your browsers out of date, pop-ups or what have you. So I thought that was interesting and a great way to circle back and just make that top of mind. So I don't know if you had any thoughts on that piece, but yeah, certainly interesting.
Speaker 2:Yeah, it is, and I think that's why it's important to kind of dissect these things and analyze them, because sometimes there's counterintuitive findings Sometimes there are. I think the industry has done a good job with phishing awareness. I'm not saying it's perfect, because people still click on phishing messages a lot, but most people are familiar that phishing exists and they shouldn't just click on everything sent their way and the attackers know that. So they're always looking for new and cheaper, faster ways to exploit targets and it's good to have an understanding of how they're going about that. And these things change, I mean they fluctuate over time. I can remember a time when the remote access, remote desktop or other types of things like that were the number one vector for most cyber attacks that I saw, and these things just come and go and it's good for us as defenders to track them because that helps focus what we're doing.
Speaker 1:You kind of mentioned, Eric, that you had kind of picked up on some patterns there. When reviewing what Wade was talking about has looking at large data sets or kind of seeing trends, how has that informed the way you approach security or personal informational security?
Speaker 4:From my perspective, one of the things that we're looking at now and starting to spend some cycles researching is the different language models, because there's lots coming out, and the different chat interfaces with them, and then using different prompts to get information out of the language models. So, either language models that are private or language models that are public. One that we've been doing a little bit of work and research with is called AnythingLLM, and it's a self-contained setup where it's essentially not using your data to train other models, which is something that a lot of customers are sensitive to. Then there are things like Google's NotebookLM, which is a pretty interesting one. It keeps your data private, it uses the Gemini language model, and it even creates a two-person podcast off of the data. But you could, say, upload a policy document to NotebookLM and then have a text chat with that document. Say you uploaded your security policies; you could then ask, well, how long does my organization require my password to be? And rather than you reading all of the documents, it would just quickly find out and cite how long the password had to be, and other information about it.
Speaker 4:So we've been going down that rabbit hole of how we use different prompts to elicit information out of documents, and there's a tool called Fabric, which is an open-source tool, and there are lots of different prompts in Fabric that you can then press against a large language model. And one of the things that's interesting is, if you run those prompts against different language models, you're going to get slightly different answers, and you're going to get some hallucinations. Just yesterday, we were looking at, I think, Grok as one, Gemini as another, and maybe OpenAI's o1. The answers that were coming back were different, and I can't remember which one it was, but one of them threw in a completely hallucinated citation. We went back and were trying to validate the findings, and one of the articles that it cited didn't exist at all. I was like, wow, this is kind of cool on one hand but not cool on the other. We've all seen or read the article where the lawyer cited hallucinated findings in a courtroom setting.
Speaker 4:It's easy to do because it looks legitimate, and balancing the information that we're getting back from these language models against reality is sometimes hard. So Wade, I'm really curious, and I've kind of gone on a tangent here about this, but there's so much with big data: if you just took the information and republished it, and then something else scoops up the republished information, you're quickly creating a kind of false sense of data, right? You're almost generating these hallucinations, this data that isn't real, and then if that's published multiple times, it could essentially become fake news, if you will. So I don't know, to me it's just really interesting.
Speaker 2:Yeah, sometimes I feel like that's no different than the rest of the security industry media. Yeah, sometimes I feel like that's no different than the rest of the security industry media. But I hear what you're saying and you know all of our real analysis. I mean, we haven't trusted that to any kind of AI. Here you can take a data set and throw it in. We've done basic super hey, just summarize this just testing around and playing around, but not in any real sense for a major research project.
Speaker 2:We're doing, where we have done a lot of experimentation sounds similar to some of the things that you're doing is in some background research and data collection. So we try to gather a lot of information on security incidents that occur and we've been working with prompts just to hey. Has this been reported to the SEC? Are there any known financial losses for this event? Give me a summary of this attack chain and the common MITRE attack techniques that were used in this and we've been noticing some hallucinations and things like that. But by and large it's a time saver for gathering those kinds of details and we've been putting some effort into that.
Speaker 3:That looks very promising, I think. Going back to Josh's question, the low-hanging fruit obviously for me is risk assessments, but behavior analytics has been big for us to try to predict you know what might be coming in using historical data from different tool sets that we do use, you know, to make sure we're at our. Some of our clients are more secure, so we've got a small team that's that's working on that. But the big one for me also is policy improvement, because you know at some clients we come across that policies might be four, five, 10 years, you know, unrevised. So we're using, we can paint a picture through data that you know why we think this and then we can shuffle that efficiently into policy.
Speaker 3:Another thing I wanted to bring up is funding. I think has been a big one and I think that's what a lot of organizations struggle with is if you're at a small, medium, large, whatever it is organization, a lot of times they might have a tight purse string right For cybersecurity tools and I think with data and what we're doing to protect our organization whether it's phishing attacks, like we've already discussed, social engineering, whatever it is we can prove with this data background why we these tools are so beneficial to our security posture and our cyber hygiene, we can show C-level suites that aren't maybe in our space or our swim lanes, why this is so important. So you know that's. So. That's a quick answer for me. But hearing you talk, wait, obviously you're super, you love this space, obviously. So I'm really curious on.
Speaker 3:You know, I'm not a day's data scientist either, I don't. You know, I do it minorly for, obviously, our careers. But, uh, what does a day-to-day look like? Like you guys go into the office or you go and wherever you guys are working that day If you are working at your organization you're looking at these data sets. What does it look like? What does your day look like?
Speaker 2:So, yeah, we all work from home, so the office is home and we like that, but normally we are sort of paired on projects. You know, for instance, today I am working with with one of my colleagues and it's all about some software security data that that we have and that we're analyzing and we're we're trying to work it to a report. So you know we're well, you know what's the prevalence of security flaws across applications scanned with static, dynamic and open source scanning, and how old are these flaws and how quickly does it take to remediate them. And so it's just the day becomes sort of chasing these things.
Speaker 2:We want to answer that we think might yield interesting insights, and then things sort of devolve into okay, well, that seems interesting, but how do we visualize that? And should we do it as a bar chart or something more interesting? What options do we have? Does that adequately convey the message that we want to give? What else could we do those kinds of things? So it's a lot of fun. I mean, I enjoy that. I realize that maybe everybody wouldn't enjoy that, but it's good. And so we spend most of our days trying to flush out those findings. Sometimes it's a slog, you know, we have messy data sets, missing data. We don't have the data that we really think answers the interesting questions, and you know so. So there's there's definitely challenges, but it's it's, it's, it's good.
Speaker 3:I it's very much research which I like, but it's applied research which which I like, and it's very much research which I like, but it's applied research which I like. I think you used the term data storyteller earlier, yeah, so kind of what you just said there. It fits it perfectly. So I kind of got my answer from that one little statement, but I appreciated the longer one. I think it's really interesting what you guys are doing, yeah thanks.
Speaker 1:It connects in with our icebreaker of you know, fantasy novels storytelling. You mentioned monetary assessing monetary impact from the cybersecurity risk. How do you use the storytelling abilities or how do you present those findings to organizations to justify spending or mitigation or remediation efforts?
Speaker 2:What's your communication style like there? It's kind of multiple. So when we're publishing a report, we'll have a section that just analyzes this from every direction. Some just statistical techniques. What's a median financial loss from a security incident? What's a 95th percentile more extreme style loss? What does the distribution look like? What can we do with this? And then pivoting that to all right. Well, if you have a certain number of records that are compromised in a data breach, how do you get to a loss? Well, you know, here's a table that you can kind of look up and get some upper and lower bounds.
Speaker 2:So we try to throw a bunch of different visualizations and statistics and descriptions in what we produce. We do a lot of presentations on these things too, because that resonates with people. We try to do a lot of infographics. If you don't want to read a full report on something like that, here's a bite-sized chunk with this one piece of information that we want you to take away. And that's a challenge for us, by the way, because I think the tolerance for long form, heavy analytical content it only goes down over time I don't feel like that goes up. So, given the nature of what we do, which is heavily analytical. We always have a challenge of trying to communicate it in ways that resonate with people and make it easy to consume, and those kinds of things.
Speaker 3:Josh, with your budget questions, I think we were for a long time fighting the you need to have a security incident happen to get funding for something right.
Speaker 3:So the higher ups, or whoever you want to classify, they need to see an issue happen, they need to be, you know, you know, kicked while they're down, let's say, for them to maybe buy that expensive product you had been telling them for years to get.
Speaker 3:And I think one thing that we've bridged that gap through exactly what we're talking about, through the data, painting that picture.
Speaker 3:But I think, with all this research especially you guys are doing, we're able to take that, put it into exactly what you said, into a short form, instead of doing long drawn out, maybe meetings, or learn, teaching them how to use a product for them to see the worth. We're able to paint that picture, as you would say, storytelling, in maybe a manageable way that they would be on board much quicker, instead of of going down the painful route where you're getting that breach on a Friday evening where nobody wants to be working and you know we're up all weekend trying to, you know, rectify the situation, you know. So hopefully at least that's my hope is we've gone, are the days of begging for something and not getting it, needing an incident to happen, then we might get it here with data and what we can present from other applications. Why it's a need you know. Hopefully a budget opens up, and a big part of that too is a risk register eric, I know you got something to say there.
Speaker 4:As a security consulting firm, we often get brought in as a third party expert to come into an organization and help them with security. 99% of the time, we're saying the same thing that the incumbent people are saying, but just because we're a third party saying the same thing, it's like, oh yeah, whatever they say is that we should do that, but the internal organization has been saying that for years. So, wade, I would imagine it's the same for you, right? You're able to articulate and tell that story through data and help those teams articulate the message they're trying to convey.
Speaker 2:Yeah, I agree. In fact, just today I posted something on LinkedIn about human risk and the stat was basically there's a really low number of users that are responsible for the vast majority of phishing clicks or malware downloads and stuff like that. And one of the comments was like yeah, we kind of already knew that. I don't really see this as all that novel and that's true, but that's not really the point. Sometimes it's good to have numbers around or actual percentages and ratios and benchmark that to know well, are you along with everybody else or are you outside the norm in whatever percentage of that that you're seeing? So I think that can be helpful, even if we don't learn something that's just earth shattering and brand new that we never even suspected. Validating the things that we think are true is a good thing and, honestly, something I don't think we do enough in the security industry.
Speaker 1:Has there been something your analysis has found that's an outlier, or surprising, that you might not have expected going into the data analysis? Something that stuck out like, oh wow, that's unique, this is something weird?
Speaker 2:Yeah, I mean, there are always little things like that in what we do, where it's, huh, I really didn't know that. And there are things that I think run very counterintuitive. I'll give an example from a long time ago.
Speaker 2:One of the early public projects I was involved in, I started the Verizon Data Breach Investigations Report, and it came from, hey, I'm interested, I want to go collect data on actual security incidents, see what I learn, and publish it. And I remember when the first version published, we had a statistic in there that 80% of incidents were from outsiders, not insiders. And man, there were people that were just really upset about that and calling it bunk. Everybody knows that insiders are 80% of security risk, this is trash research, get this out of my face.
Speaker 2:A very strong reaction, because it was a deeply held belief at that time that it was all insiders, and it prompted some interesting conversations. But now you think about that and everybody's like, yeah, duh. Everybody's worried about state-affiliated attacks and organized criminal groups. We're still worried about insiders, but at that time insiders were the main thing, and since then we've realized that, well, there are a lot more people outside your organization that want to attack you than inside, just by volume. So I like those kinds of discussions, when some piece of research or analysis challenges the status quo or the thinking of the day, because I think it's healthy. Even if the analysis is wrong or something, we can talk about it.
Speaker 3:Wade, you touched on the Verizon breach report just a second ago, the one you worked on. Have you been involved in any of the other ones? I feel like it's happening a lot now. T-Mobile has been in the news a bunch with their incidents, but most recently it was in the news that SMS messaging and phone calls are no longer considered secure, right? Have you had any involvement or done any research on that?
Speaker 2:I have not. I worked for Verizon for a long time, but I've been gone for almost 10 years now, so I haven't kept up with that. But on what you bring up, I still think it's better than just a password alone by many orders of magnitude. But as attackers adapt to our evolving controls, we've got to continue to evolve, and that's important. That's part of why we do what we do.
Speaker 4:And just to clarify, you worked on the Verizon breach report, the report that comes out every year? It's like 100 pages now. I was just looking at it yesterday, I think.
Speaker 2:Yes, yes, that is correct. I was the initial person behind that, and then I led that team for seven or eight years and then left. I think it's been going on longer without me than it was with me now, which is cool to see. Yeah.
Speaker 4:I always look forward to reading it. That's good, and it's often cited.
Speaker 2:It is, yeah, and that was fun to be a part of, because back then there wasn't a whole lot of data on security incidents, and it really got a lot of attention and a following because of that. I don't think it would today, because there's so much more information available on data breaches, and people are reporting them and reporting analysis, whereas back then that was kind of novel.
Speaker 1:Just picturing Eric on vacation on a beach, reading the 100-page Verizon security risk report.
Speaker 3:I was laughing to myself about that.
Speaker 4:Yeah, I'd be in the room because I don't want to get sunburned or anything.
Speaker 3:Eric doesn't like the camp.
Speaker 1:I got to get the SPF 50 going Absolutely Big hat gloves the whole work.
Speaker 2:Talking about risk, we did try to make those enjoyable to read by putting a bunch of little jokes and things in there. I wouldn't call them the height of literature or anything like that, but hopefully more fun than your average security report.
Speaker 1:Have you considered adding cat memes to your reports?
Speaker 2:Oh boy, I'm pretty sure there are some in there. We're off the rails.
Speaker 3:Well, okay, I'm going to bring us back with a question, because I've been curious about this as we've been talking, and because you're in it every day, Wade. From where we're at right now, if we fast forward five or 10 years, where do you see your industry going? I know it's a tough one.
Speaker 2:And when you say your industry, do you mean the cybersecurity industry or the data side? For a long time, I think, it was very basic: if you just collected some data and analyzed it, you had answers that were new. That was the early days, a long time ago. And then it gets harder and harder, and I view that as a good sign that we're maturing, right? We need to do better analysis in order to answer the next question, and so we have to mature the art of analysis. We have to get better and more refined data, or start combining data sets together to answer these questions. So that's what I see: continued improvement in our ability to do that.
Speaker 2:I've loved that data science over the last decade or so has become its own thing, and now security data scientist is an actual job title. What used to be a super-niche thing, people now generally recognize makes sense: combining data skills and cybersecurity acumen, because you can do some great things with that. I think that unlocks a lot of doors and makes the industry better and more mature, makes better products, makes for better decisions. Decision-making is such a big, important part of managing security, and the more we transform that decision-making from winging it, where it's all about how good you are, to relying on good data and making data-driven decisions, the better the whole industry is served.
Speaker 3:So instead of follow the money, we follow the data.
Speaker 2:We'll still follow the money. Who am I kidding?
Speaker 3:Yeah, I think we probably all share that same hope, because the better we get at reading and sifting through data, the better these applications we rely on every day get, right? And these applications, I don't want to say they're taking over the work we would do individually; rather than replacing a person on your team, I think they're piggybacking off that person, or watching out for things that go bump in the night. So those things just continue to get better and evolve, and it's making the security professionals even better, reading all these reports and whatever else.
Speaker 2:Yeah, and you guys have mentioned investment decisions a few times, and one of the things that I really hope happens is continued pressure to vet out the investments we make in security products and services, because I think for a long time the industry has gotten by without having to do that, but now security budgets are swelling so large.
Speaker 2:I think people are starting to ask questions, and executives want more, and that forces better data and analysis to really figure out: all right, this actually works, and here's my evidence for it. By and large, we haven't had to do that very much, and there are questions we still can't answer, like: if I implement this thing, what reduction in incident likelihood should I expect to see? Or what reduction in losses over the next five years should we expect? We're still not very good at answering those questions, but there's more and more pressure and expectation that we need to be, which goes back to the data and analysis. So I think the analysis can actually help the industry with that.
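As a sketch of the kind of question Wade says the industry is still not good at answering, here is a crude Monte Carlo comparison of expected five-year losses with and without a control. Every probability and dollar figure is made up for illustration; a real model would use calibrated estimates of incident likelihood and loss severity.

```python
import random

def expected_loss(p_incident, loss_low, loss_high, years=5, trials=20_000, seed=1):
    """Monte Carlo estimate of expected total loss over `years`.
    Each year an incident occurs with probability p_incident, with
    severity drawn uniformly from [loss_low, loss_high]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        for _ in range(years):
            if rng.random() < p_incident:
                total += rng.uniform(loss_low, loss_high)
    return total / trials

baseline = expected_loss(0.30, 50_000, 500_000)       # no new control
with_control = expected_loss(0.18, 50_000, 500_000)   # hypothetical 40% likelihood cut
print(f"Estimated 5-year loss reduction: ${baseline - with_control:,.0f}")
```

The point of a toy like this is the one Wade makes: forcing yourself to state likelihood and loss ranges, even rough ones, turns "should we buy this?" into a comparison you can argue about with evidence.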
Speaker 3:I think that's one thing we do focus on. You were talking about loosening the purse strings and vetting out the applications we're bringing in because of that budget. Change management boards, I think, are a big deal for that. We assist on one that I can think of in particular, where we're working on vetting out different applications, the goods and bads, why we might want to do that. The business might come with an idea of what they want to do, and we can help direct them in a meaningful way that tells a story about why the budget is the way it is, or isn't. So the change management piece, I think, is big for that, at least for me.
Speaker 4:Yeah, and Wade, just to go back to influencing the decision makers: day to day in security, or in other aspects of technology, we're all kind of nerds, so to speak, steeped in this stuff, reading it for pleasure, right? We're probably going to read more scientific news or security news than about, you know, who Miley Cyrus is dating or whatever. I mean, that's hard-hitting news. Josh might be a Swifty, but you know.
Speaker 4:I actually do know who Miley Cyrus is dating, just so you know, but I'm going to know more about a Falcon 9 engine than I am about news like that. Anyway, going back to influencing the decision makers: have you found any tips or tricks for taking the knowledge that's going to be impactful for an organization to the board and leadership, and having that conversation with people who don't know a lot about technology but control a lot of the budget direction? Any tips for people who might be challenged with that?
Speaker 2:There are approaches that seem to resonate better than others. One of the tendencies, a mistake that security people make, is to think, well, here's what I need to do my job as a security manager, and here's what is important to me, therefore I'm going to pass that up the chain. And that's usually a mistake, because your job is managing the security program, so you need these certain KPIs and that kind of stuff, but they're not what non-security people at the business level need. So it's doing that translation, culling it down, not thinking you have to over-communicate, really boiling it down to what matters. And I think that goes back to risk. One of the reasons there's been a lot of emphasis on risk and resilience is that that's the bottom line for the business, and if security can learn to communicate in ways that sound like the other executives reporting into the board, I think it helps them out.
Speaker 2:And storytelling, that's been a theme in this conversation. I've heard from a lot of board members that storytelling is very effective, because it can take a very complicated scenario and make it interesting: hey, I want to listen. Okay, great, I get what you're saying. Let's make sure that doesn't happen to us, and here are the three things we need to do for that. Those kinds of things I've heard a lot of success with, and heard from people that they resonate well.
Speaker 1:We'll wrap it up with this. Uh, this last little thing. Wade, you mentioned that when you're dealing with large data sets, you're not relying on AI to ingest that material, but you do use AI for data analysis that then you do bring in to inform your algorithm or your process. What's your stance on AI in terms of just ethics and security at this point?
Speaker 3:Oh man.
Speaker 2:You need another hour, right? That's an interesting wind-down question. My stance is developing, honestly, because I'll admit I'm not an expert in AI and haven't put it through its paces in all the use cases in security, so there's a bit of a wait-and-see approach that I have, trialing it where it's good.
Speaker 2:This is interesting in my world as a professor, too. There's been tons of discussion about the use of AI for completing assignments, and my approach is: well, if you're going to do that, I'm not going to prevent you. That's just one of the tools the world now has in its tool belt; figure it out. But what you can't do is have it rip and hallucinate a bunch of junk, pass that to me, and expect me to be okay with it. You need to do your homework, check it, and make sure that whatever it gives back to you passes muster. So it's unfolding. I don't have a stance yet, and I'm as confused and figuring it out as everybody else at this point.
Speaker 1:Well, thanks for your time today, wade. We'd love to stay in touch with you and see how things develop on the AI front and more generally as well.
Speaker 1:Yeah, I'd like to see your sword collection grow. Maybe next time we see you, you'll have a few more katanas or something like that in the background.
Speaker 2:Yeah, my kids have a few katanas. They're wooden, but they do like them.
Speaker 4:Which one of those did you get first?
Speaker 2:huh, I think I got the the excalibur-ish one first and then I probably got this one after I saw Braveheart, because it's the Claymore and this is the Conan sword.
Speaker 1:Any Ren festivals down in your neck of the woods Wade?
Speaker 2:there, there, there, there are, and I, I, I went um, uh, to something. Uh, I forget what it was called recently, but it was yeah along along those lines.
Speaker 1:So I brought my kids to their first ren fest this year. So, um, I might have to pick up a sword. Come on back, nick to your hometown, here I was gonna bring it up though that.
Speaker 3:Did you all see the uh documentary on hbo about the rent fair guide in texas? No, because apparently he apparently gets like an hour from my house. Uh, here and he started basically was the originator of rent fairs and it's the biggest one in the world. People live with their campers on this site, uh, like an hour away, uh and uh, check it out. It's super interesting. The guy's a total nut and it's wild to see this world and it's huge here. It is huge. They had one of the Saturdays two weeks ago. They had like 60,000 people there or something. Oh, my goodness.
Speaker 3:So, it's just crazy. So if you're into that, check out the documentary. The guy is. He's crazy in a good way. That's cool.
Speaker 1:And with that we've come full circle on the conversation. Thanks again for your time, Wade, You've been listening to the Audit presented by IT Audit Labs. My name is Joshua Schmidt, your co-host and producer. Today we have Eric Brown and Nick Mellum from IT Audit Labs, and we've been joined by Wade Baker from Scientia. Scientia I think I got it Scientia. All right, we will be publishing this on Spotify, YouTube and all of the places where you get your podcasts. Please like, subscribe and share with your friends, and we'll see you in a couple weeks.
Speaker 4:You have been listening to the Audit presented by IT Audit Labs. We are experts at assessing risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J Schmidt, and our audio video editor, Cameron Hill, you can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials and subscribing to this podcast on Apple, Spotify or wherever you source your security content.