The Audit - Presented by IT Audit Labs

Safety Science Meets Cybersecurity: Lessons for Risk Management

IT Audit Labs Season 1 Episode 49

Discover the vital intersection of safety science and cybersecurity, where human psychology meets technical innovation. 

In this episode of The Audit, special guest John Benninghoff shares his expertise in safety science and how its principles can improve cybersecurity. From applying safety protocols in the tech industry to enhancing security culture through proactive human behaviors, we dive into a range of topics. Plus, we discuss how risk quantification and ergonomics can drive better security outcomes. 

In this episode, we’ll cover: 

  • How safety science principles can enhance cybersecurity practices 
  • The role of human behavior and psychology in security outcomes 
  • Lessons from aviation safety and their application in risk management 
  • Real-life examples of security clutter and how to reduce it for better outcomes 
  • The importance of risk quantification and proactive system maintenance 

Join us as we explore key insights and practical tips on blending safety science with cybersecurity, and don't forget to subscribe to The Audit podcast for more insightful discussions covering the full spectrum of cybersecurity. 

#Cybersecurity #SafetyScience #RiskManagement #DataProtection 

Speaker 1:

All right, so you're listening to The Audit, presented by IT Audit Labs. I'm Joshua Schmidt, your producer. We're joined by Nick Mellom and Eric Brown, as usual, and today we have our special guest, John Benninghoff, who's going to talk about safety science. John, can you introduce yourself and give us a little background, maybe how you got into cybersecurity and safety science and what you're working on now?

Speaker 2:

Yeah, so, John Benninghoff. I live here in the Twin Cities, Minneapolis-St. Paul.

Speaker 2:

I'm actually currently at my co-working space in downtown Minneapolis, at the old Minneapolis Grain Exchange. I got into cybersecurity a long time ago, when I attended my first SANS security conference in 1998, so I've been doing it a while. I got interested in safety partly because of my grandfather, who was a pilot and flew passenger jets for American Airlines for a long time. I saw in safety an excellent track record for managing risk: the risk of flying a plane has steadily gone down over the years, and the risk of house fires has steadily gone down over the years. More recently, a few years ago, I went and pursued a master's degree in safety science, an online degree from Trinity College Dublin. Although I'd done some reading before that, it was a deep introduction to the world of safety, I learned a lot from it, and I'm starting to apply those lessons to cybersecurity.

Speaker 1:

That's great. Can you give us a little more background on where safety science stems from and how it became a thing? I don't know if a lot of people have heard of it. It's been integrated into all of our lives through the things you mentioned, like motor vehicles, house fires and airplanes, but how does that relate to cybersecurity?

Speaker 2:

I found a really good definition of safety science in a book, The Foundations of Safety Science: safety science is the interdisciplinary study of accidents and accident prevention. I think that's a really good description of what it is. It's something that really started with the Industrial Revolution and the early 1900s. You may have heard of Taylor and scientific management; he was an early safety proponent, in terms of trying to find optimal ways of working in factories, and part of that was making it safer for the workers as well. But over the years safety has expanded. A lot of it is now a blend of organizational psychology, engineering and other disciplines, and today it's grown into something we call resilience engineering. There are prominent safety scientists doing that work, and some people are starting to adapt it to security as well.

Speaker 1:

I can't imagine it was a very safe work environment back during the Industrial Revolution. Lots of dangerous machinery and shifty management techniques. I'm sure it sapped the health of a lot of workers.

Speaker 2:

No, absolutely, and I think the history of safety as a science has been defined primarily by accidents. Early mining and industrial accidents were the impetus for taking safety seriously, but plane crashes are often the cause for innovation too. The checklist was the result of the crash of the B-17 prototype; that's when the checklist was invented. More recently, the Three Mile Island accident had a big impact on safety science, as did the space shuttle disasters.

Speaker 3:

That's interesting, John. It's interesting that you brought up aviation and fire protection. One of our customers deals with fire safety and fire protection, and it was kind of interesting to learn a little bit about that industry as we were working with them. But on the aviation side, Cirrus came out, and now I think others have copied them as well, but I think they were the first ones that came out with a parachute that would deploy for the full airplane. They're headquartered up in Duluth, and they're in Tennessee now as well.

Speaker 3:

But it's a pretty cool aircraft and a pretty neat concept, and it has certainly saved quite a few lives over the last, what, 20 years or so.

Speaker 4:

So, John, we're getting into this whole broad topic of safety science, but the first question I've been thinking about is: what does your day-to-day look like? Are you working on any specific projects, or how are you pushing the industry forward? For somebody that's getting into this, what would your day-to-day operations look like?

Speaker 2:

Yeah, so that's a good question. I'll answer it this way. One of the challenges that I definitely learned from safety is that you can't just take practices from aviation safety and apply them to other realms of safety. What works in aviation isn't necessarily going to work the same way in marine safety, for example. So what I try to do with security is use safety thinking as a new perspective, a different way of looking at things, and take those lessons and adapt them to the world that we have today.

Speaker 2:

One of the concepts from safety is that we can't be successful if we only focus on preventing bad outcomes. With phishing, we don't want people to click on the link, so we send out training to try to prevent them from clicking, but that's trying to prevent a bad outcome. I think a better way is to focus on the positive. Instead of focusing on just getting them not to click the link, focus on getting people to actually report phishing, which allows you to take action based on those reports. And if you look at the system a little more holistically, you can say, hey, we're going to take those reports, and if we get early reports from reliable reporters, because we've scored them, then we're going to automatically block the links in that phishing email. So we can use automation and the humans in our organization to create a secure environment for us.
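A minimal sketch of how the reporter-scoring and automatic link-blocking idea John describes could fit together. All class names, weights and thresholds here are hypothetical illustrations, not any vendor's implementation:

```python
# Illustrative sketch only: score phishing reporters by their track record and
# auto-block a reported URL once enough reliable reporters have flagged it.
# All names, weights and thresholds are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Reporter:
    """Tracks how often a user's phishing reports turn out to be correct."""
    user_id: str
    true_reports: int = 0
    false_reports: int = 0

    @property
    def reliability(self) -> float:
        total = self.true_reports + self.false_reports
        # New reporters start at a neutral 0.5 until they build a history.
        return 0.5 if total == 0 else self.true_reports / total


@dataclass
class ReportQueue:
    """Accumulates report scores per URL and decides when to auto-block."""
    block_threshold: float = 1.5                 # hypothetical confidence bar
    scores: dict = field(default_factory=dict)   # url -> accumulated score
    blocklist: set = field(default_factory=set)

    def submit(self, reporter: Reporter, url: str) -> bool:
        """Record a report; return True if the URL is now blocked."""
        score = self.scores.get(url, 0.0) + reporter.reliability
        self.scores[url] = score
        if score >= self.block_threshold:
            self.blocklist.add(url)   # e.g., push to the mail gateway or proxy
        return url in self.blocklist


# Usage: two early reports from reliable reporters are enough to block.
queue = ReportQueue()
alice = Reporter("alice", true_reports=9, false_reports=1)  # reliability 0.9
bob = Reporter("bob", true_reports=8, false_reports=2)      # reliability 0.8
queue.submit(alice, "https://evil.example/login")           # 0.9, not blocked yet
print(queue.submit(bob, "https://evil.example/login"))      # 1.7 >= 1.5 -> True
```

The design choice mirrors John's point: the humans supply the signal, and the automation acts on it faster than any one analyst could.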

Speaker 4:

So it's kind of like the study of the actual end user as well, and how you can prevent cybersecurity incidents from happening.

Speaker 2:

Yeah, I think that's part of it. The emphasis in safety has really gone from historically looking at individuals and preventing people from making mistakes to looking more systemically, understanding how we can design the system as a whole with the assumption that people are going to make mistakes, and then capture, control and prevent those mistakes from causing further damage.

Speaker 4:

Now that we're getting further into the conversation: in a past life I worked in the oil fields in North Dakota and went through Hess training and OSHA. Just thinking about this off the cuff, I'm sure they are tied into your industry heavily as well, right? Obviously different swim lanes, maybe, but safety as a science, I'm sure, is big for them.

Speaker 2:

No, absolutely. And I'm actually curious to know, Nick, from your experience, how effective was that training, and how did it match up with how you actually did your job?

Speaker 4:

That's a great question. I think we went through a four- or five-day training from Hess, and this is 15 years ago: gas masks, all those different situations that could arise in the field, and you practice those things and get real-life training. I think having that hands-on training, not just book training, and then learning from people who had those issues or had been through that specific scenario, was vital for us. You saw that this really can happen, this is very dangerous, and it did help make us operate a lot safer.

Speaker 3:

John, on the safety side, there could be instances where the pendulum may have swung too far, and I'd like to get your insights on that. I'll give you a for-instance. I've done some travel in Europe and have been to a few different countries where you're exploring maybe some of the historic sites, say some ruins, and they let you go through the entire area. You can walk right up to the edge. You might be three stories up, but you can walk right up to the edge if you wanted to, of, say, an old castle or what have you. And I've done that in a couple of different countries, Finland, maybe Greece, a couple of others. Not third-world countries, but countries at the same level that we are here in the US.

Speaker 3:

But then you go to something like that in the US and you're going to have yellow bars around the area, you're going to have all this safety stuff, and you can't even approach the edge of it. In some cases it feels like it could be overprotecting us from ourselves and taking away from the enjoyment. It was so unique to me that I remember remarking, wow, if this was in the US, we wouldn't even be able to be within 10 feet of the edge of this thing, but here we're right up to the edge, nobody's falling over, it wasn't a big thing. So I just think about those things from time to time: how much protection is just enough, without going overboard?

Speaker 2:

Yeah, that actually comes up a lot in safety science. It's interesting you talk about visiting different sites; as you were telling the story, I recalled that I visited Ireland and went to the Cliffs of Moher, probably about 20 years ago, and you could walk right up to the edge. There were no barriers, nothing. And I've been told, nope, it's all fenced off now and you can't get up to the edge.

Speaker 2:

I think it does come up in the context of whether the safety rules interfere with work, and I think this reflects on security as well. One of the challenges with centralized policies, centralized procedures and centralized controls is that they assume one way of working, what we call work-as-imagined. And work-as-done almost never lines up with work-as-imagined, and often deviates significantly. So it's challenging to find the right balance where you're keeping people safe but not interfering with work or, as you say, interfering with their experience of a historical site. There are even what seem like obvious examples.

Speaker 2:

Like, you can't drive your car unless your seatbelt is buckled, right?

Speaker 2:

That seems like something there would never be a reason to want to avoid. Well, in fact, there are cases where that happens. The case I heard on one of my favorite safety podcasts, which I'll plug, The Safety of Work (I think it's really accessible for a broad audience), was an oil and gas company where workers are out in rural areas, driving a few miles an hour, and they have to go through several fences. So are they going to buckle up, drive 10 feet, unbuckle, get out of the car, open up the fence, and buckle back in? Probably not. So I think the real challenge is: how do you find the right balance of allowing people to make their own decisions about safety while still keeping them safe and while still having the rules that you have to have?

Speaker 3:

And how much of that responsibility is on the user or the individual versus the corporation, right? Probably some of it is because we're a litigious society, and if those guardrails weren't in place then there might be lawsuits.

Speaker 2:

With safety, that's one thing that's definitely different from security. When you're making decisions about your own personal safety, you're making decisions that could directly impact you: if you're doing something unsafe and something goes wrong, you could get killed or injured. But when we're talking about security and making security decisions, the worst-case scenario is probably that you get fired. So the personal risk is different, and that's again one of the things I try to keep in mind when adapting lessons from safety to security.

Speaker 4:

So I guess, branching off that, John, what are some of the more common safety risks that you're seeing, maybe in software development? I think we just saw that with CrowdStrike and Microsoft, so maybe you can touch on that. Or what are the common trends that you're seeing?

Speaker 2:

CrowdStrike is a really interesting story. I actually did a session at a conference where we talked through that whole incident, and it's easy, with the benefit of hindsight, to say, hey, CrowdStrike made mistakes, they did it wrong, they screwed up, and therefore they caused this outage. One of the big lessons from safety, put succinctly, is that blame is the enemy of safety, because blame is the enemy of learning. Blame impedes learning. So if we can avoid blame, probably not when we're dealing in the legal system, but outside the legal system, then we're going to learn more when things go wrong, or when things almost go wrong and we get a near miss.

Speaker 2:

With CrowdStrike, I think what's interesting is that they had a lot of very difficult trade-off decisions to make. From what I've read, and I read both reports from CrowdStrike, they did invest a fair amount in quality assurance for their products. They did a lot of testing of the code, but they did less testing of the configuration changes that they pushed. But the whole reason they have those configuration changes is so that they can quickly get security protections out to their clients. So how do you make the right decision without the benefit of hindsight? It gets tricky.

Speaker 2:

Getting back to your question of what safety risks I see in software: that was actually part of my academic work. I made the argument that our software systems are in fact becoming safety-critical, both in very real ways, where they can impact life and health, and in the broader sense that they impact organizations and their ability to operate. And I think the time has come to bring engineering and safety engineering principles to building those software systems.

Speaker 1:

That's interesting. I'm going to serve this up because I'm curious to see what Eric has to say. I know we've talked a lot on the podcast and in person about balancing safety protocols or implementations against the fatigue that people in an organization may experience when they're constantly being educated or warned about this type of stuff, whether it's in software or phishing campaigns. I'd love to hear how some of that safety science has shown up in Eric's work, and to go back and forth between Eric and John on that topic a little bit.

Speaker 3:

Sure. Thanks, Josh. This is interesting because I ran into it just this afternoon. I'm helping my mom through some healthcare challenges, and we were on the phone with, let's call it, a very prominent hospital that's Minnesota-based with a global reach. I was on the phone with that hospital getting her account password reset. Going through the steps to do that, it was something we couldn't do over email; it had to be done with a live person.

Speaker 3:

She gets to the point where she's creating the username. So she creates the username, and then she says to the lady, okay, I'm putting in the password, do you want me to put a password in? And the lady, the tech support person from that hospital, says, yes, you need to put a password in; here I use my pet's name and the year. And I almost fell out of my chair.

Speaker 3:

And to your point, Josh, about what's that balance between safety and where is that intersection of education: we've been educating users on passwords for 20-some years, longer. And then the lady proceeded to explain that the password had to be at least eight characters. I was just thinking, wow, they were very concerned that my mom was sharing her account with me, for HIPAA-violation reasons, versus being concerned about a password so weak that anybody could get into it. So I found that quite interesting. We do have to put those guardrails in place on the technology side of cybersecurity to prevent the users from injuring themselves or the company as a whole. And the shift away from passwords I think is a great thing, as we look at passkeys and other forms of multi-factor authentication, because we just can't rely on the users and the training to provide that baseline.

Speaker 4:

It's probably safe to say they didn't have any MFA after that, or that it was an option but not mandatory. Interesting.

Speaker 1:

Okay, I want to take it here. I'm really interested in what you said, John, about taking safety science principles that maybe came from engineering and applying them to cybersecurity, in an instance like password management. It seems like there's a lot of psychology behind it. Maybe you could speak to multiple things: balancing that fatigue versus safety protocols, and how does that tie into the engineering aspect of things, or that viewpoint?

Speaker 2:

There's definitely an analog in safety, and one of the areas of study within safety is psychology and ergonomics, because even very early in the history of safety they realized that ergonomics really can help promote safe outcomes. The classic example from aviation safety is that pilots had controls that were right next to each other; one control, I think, controlled the flaps and the other controlled the landing gear, and they were getting them confused. So a brilliant engineer basically said, oh, here's what we're going to do: on one of the two levers we're going to put a little set of wheels, and on the other one we're going to put a little wing, and then they won't mix them up again. That kind of thinking has been carried throughout aviation safety. If you talk to pilots today, they'll talk about things like cognitive load, and you might start to hear people in technology talking about that as well.

Speaker 2:

It's an acknowledgement that there are just limits to what we can do. We absolutely need to apply those principles to security: understanding human limits and supporting people to make better decisions rather than just blaming the user for doing it wrong. And I think we're starting to make that transition in security. I think passkeys are a great example; they're an acknowledgement that passwords just don't work. In my last job I was fortunate to work with a research psychologist who had done research and basically said, hey, guess what, passwords just don't work, especially for older folks who have a harder time remembering things, and anything we can do other than passwords makes them much better off. So taking that approach of changing the design of our systems to be more human-friendly, like passkeys, is one way lessons from safety can be adapted to security.

Speaker 4:

Along this whole conversation we're having right now about fatigue: Josh, Eric and I read an article a couple of weeks ago about a gentleman in a workplace who was saying that he's never going to go offline, his passwords don't matter, his data doesn't matter; I'm just a small fish in a big pond, I don't need to be that protective of my data because nobody cares. So I'm kind of curious, John, what your take is on that, because we seem to be working so hard every day to control this data, making environments as safe as possible, being good stewards of data, and then we get one or two bad apples, let's say, who can poison everybody into not caring about the company's data or their own data. This is, I think, like you said before, a near miss of the same topic of fatigue: are we driving them to that point, or are we not doing it enough, or maybe doing it the wrong way? So I'm curious about your take there.

Speaker 2:

I think within an organization, you're going to get a range of attitudes towards security. In the safety space, they talk a lot about safety culture: how do you actually create a culture where safety is valued and important? I think that's what we try to do with security as well. How do we create an environment where security is valued? Technology workers, both developers and infrastructure engineers, increasingly do understand the importance of security, and they actually understand security pretty well.

Speaker 2:

I was actually talking with someone a couple of months ago, and he said, I don't really know much about security, but I know you really probably should use multi-factor authentication and apply your patches. And I said, well, you've just hit two of the top three things you can do to improve your security. I think the fatigue you're talking about often comes from something I'll call security clutter. There's an idea in safety, I think the paper is by Drew Rae, and I'm blanking on his partner's name; they're also the co-hosts of the Safety of Work podcast. The notion of safety clutter is that, over time, policies tend to accumulate, and it's always easy to add more policy, both in safety and security, but it's hard to take it away. So look at the policies: which ones are actually working, making things more safe or more secure, and which ones maybe aren't really doing much, so we can get rid of those. That reduces that kind of fatigue and helps you improve the security culture.

Speaker 3:

I'm thinking about a client that we're working with. We recently got brought in to provide some security leadership and oversight to their organization, and going through the organization, they've got thousands of people who work there, and over a third of them have local admin. Just looking at it, it's like, what is going on here? It was something that came from the pandemic, where it was just easier to give people local admin when they were remote and needed to install their software or whatever it was. But then in the aftermath of coming out of the pandemic, you have malware and phishing attacks coming into the organization, and people with local admin click on that, they don't know they've clicked on something malicious, and then we've got a problem.

Speaker 3:

But going back and working to unring that bell has been really difficult, because the users feel like, oh, you're taking something away from me, you're taking away something that I could do and now I can't do it, even though they never should have been able to do it from the beginning. Right, these are work computers, they're not your home computer. So I've always found that to be interesting, because it's more of a psychology problem than it is a technology problem. Most of the users, 99.9% of the users, don't need local admin; I'd say 100% don't need local admin. But they were given it to install something at one point in time, and then it was never removed. So when you announce that you're going to remove it, it's like, well, the world is ending. And I'm sure it's like that in other areas of safety, like when we were talking earlier on the call about your trip to Ireland, where there were no fences and now there are. It's like, I've lost something, because now I can't get as close as I used to be able to.

Speaker 2:

That was part of my early journey into safety. Back in about 2008, I was frustrated with the state of security and the de-emphasis of culture, psychology and understanding human behavior and how people think, and I think we're seeing that change in security. We're increasingly acknowledging that we have to take that into account. The safety program where I did my academic studies is in the psychology department for a reason: the primary academic research really focuses on psychology.

Speaker 2:

If you look at aviation safety, a lot of the technical problems around just keeping airplanes up in the air, the actual engineering and technology problems, were mostly solved in the 70s, maybe even a little before that. All of the advances in safety and flying after that have been psychology: things like crew resource management, which was implemented as the result of the worst aviation accident in history, at Tenerife. It was just starting to come on the scene then, but that accident really accelerated its adoption, because in that case the pilot took off without authorization and that led to the crash. Now the crew works together: the pilot flying and the pilot observing, they're not even called the pilot and the co-pilot anymore, are constantly cross-checking each other, making sure neither of them is making mistakes and calling things out in a way that's non-confrontational. It's designed to make the flight safer.

Speaker 3:

There was that incident, and you may recall this one, I think it was a flight of Japanese origin, I believe, where the pilot flying at the time, the captain of the aircraft, was doing something they shouldn't have been doing, or maybe not controlling the aircraft the way they should have been, but the co-pilot, who was the first officer and subservient to the captain, didn't want to call out the captain's mistake, and I think that led to a crash. I'm not sure if there were fatalities, but that type of seniority or hierarchy, not wanting, culturally, to correct the captain, led to that. I think since then airline training has introduced ways to de-escalate and point out things that are happening that may be one person's responsibility but somebody else has oversight of.

Speaker 2:

Yeah, absolutely. I vaguely remember a similar story, which I think was actually a Korean airline.

Speaker 4:

Oh really, was it Korean?

Speaker 2:

I think it might have been, yeah. The rest of the story, as I heard it, was that part of the training was they said, okay, you're all going to adopt English names. So it was, again psychologically, a way to distance themselves from their traditional culture, by basically using the English names in the cockpit to address each other.

Speaker 2:

So, Eric, do you happen to be a pilot, then?

Speaker 3:

I do fly, yeah.

Speaker 2:

What's your take on what I've talked about? Have I accurately represented it?

Speaker 3:

Yeah, spot on.

Speaker 3:

You know, when I started flying, it was about 25 years ago, maybe a little longer, and crew resource management wasn't at that level, the technology wasn't there, and even the aviation safety wasn't there. From the perspective of when I started flying, not every aircraft had to have a transponder. But now every aircraft flying, unless it's in a special category, has to have a transponder. So you can look on FlightAware and see all of the flights around us, and in the cockpit you can see all of the other aircraft around you at the same time. And you don't even have to be working with air traffic control, if you're in uncontrolled airspace, to be able to see those other pilots, which makes it a lot safer. Maybe you were looking out the window before, but it's kind of hard to spot an airplane or a helicopter when you're flying. Now you see it on the screen and you're like, okay, yep, they're over there, and that has really helped a lot.

Speaker 2:

So I'm going to draw an analogy to cybersecurity. I think the challenge for us as cybersecurity practitioners is: how do we actually create the technology that makes it easier for people to make the right decisions around security? How do we create the right culture? I think it's partly the ergonomic aspects: when you're working on your computer and you're getting that email, we've done some simple things like, hey, this email is external. But maybe we can make that a little more advanced, where it says, hey, the AI says this has a very high risk of being a phishing email, so maybe you should be extra careful about it, maybe contact that person on the phone and see if it's legitimate.
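As a thought experiment, here is a small sketch of the graduated warning John suggests, where an external-mail banner escalates when a model scores the message as likely phishing. The score ranges, banner text and the assumption of an upstream ML score are all illustrative, not a real mail-gateway API:

```python
# Hypothetical sketch: turn a phishing-risk score into a graduated warning
# banner instead of a single one-size-fits-all "external email" tag.
from dataclasses import dataclass
from typing import Optional


@dataclass
class InboundEmail:
    sender: str
    subject: str
    is_external: bool
    phishing_score: float  # assumed: an upstream model emits a 0.0-1.0 score


def banner_for(email: InboundEmail) -> Optional[str]:
    """Pick a warning banner appropriate to the estimated risk."""
    if email.phishing_score >= 0.8:
        return ("HIGH RISK: this message looks like phishing. "
                "Verify with the sender by phone before clicking anything.")
    if email.phishing_score >= 0.5:
        return "Caution: this message has some characteristics of phishing."
    if email.is_external:
        return "This email originated outside the organization."
    return None  # internal, low-risk mail gets no banner at all


# Usage with made-up values:
mail = InboundEmail("ceo@paypa1-support.example", "Urgent wire transfer",
                    is_external=True, phishing_score=0.92)
print(banner_for(mail))
```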

Speaker 3:

And with that AI, there are some great tools out there today that we work with and help customers with. The nice thing about them is they'll remove the email from the user's inbox without the user even having to make those decisions. So 90% of the bad emails are removed, and the ones that aren't are monitored, so to speak. But there's so much coming through from an email perspective, and if you can convince somebody to click on it, you've really got a good foothold.

Speaker 1:

The air traffic stuff you were talking about reminded me of Airbus and that really bad crash in 2001, where the first officer was using excessive rudder inputs after some turbulence to try to correct the plane when it didn't need to happen, and I think that was ultimately determined to be what caused the plane to crash.

Speaker 1:

So I'm wondering, John, what do you think about having a balance between giving people some control? Because if you take it all away and put it on autopilot, and this goes for cybersecurity too, people just check out completely. And I think, to Eric's point, that's maybe part of the issue of having guardrails everywhere; maybe it's the litigiousness of the society, but if you nerf the whole world, people kind of stop thinking for themselves, and we want people engaged in what they're doing, right? So maybe you could speak to how, in developing software or implementing these security theories, you give people enough control that they feel integrated in the process, while also keeping those bad emails out of their inbox?

Speaker 2:

Yeah. I think the good news is that we actually have an established practice within technology that has a good answer for that, and that's site reliability engineering. If you're not familiar, it was created at Google; it's their approach to DevOps, operations, and managing incidents and availability.

Speaker 2:

Not by accident, a lot of safety terminology, safety thinking and safety science has moved into the SRE community. There are a handful of other people in technology who have master's degrees in safety science, not necessarily from my school, but a different school, still an excellent school. Safety principles are being applied in SRE; it's there if you know where to look. And part of what they talk about is the role of people.

Speaker 2:

What you want to use the machines for is reducing toil: those repetitive tasks that just wear you down, the heavy workload of things that are routine and predictable. That frees people up to do what they do best, which is deal with the unexpected, and the unexpected is always happening.

Speaker 2:

In safety, like security, people are both the strong and the weak part of the system. The machines aren't going to be flexible and adaptable, but the people are, so when they come up against a novel situation, they're the ones who are going to be able to do something to actually improve security. Even if you go all the way back to Cliff Stoll and the book The Cuckoo's Egg: Cliff Stoll was a Unix sysadmin at a Department of Energy facility, I think in the 1980s, and he saw a few-cent discrepancy in the accounting logs for his system. But that discrepancy, and his insistence on figuring out why it was there, led him to identifying an East German spy ring and catching them all, because you had one sysadmin who was paying extra-close attention, saw something wasn't right and kept digging.

Speaker 1:

Eric, do you have any anecdotal stories about how something like that has happened at IT Audit Labs? Or Nick, maybe you guys can think of a real-life scenario where the human factor has been what saved an organization.

Speaker 4:

Well, I can think about a million where that didn't save the organization.

Speaker 2:

Yeah, I was going to say I actually have an opposite story, which I'm happy to report on. At one of the companies I worked at, we had a DBA get a phishing email. This was probably a little earlier in our sophistication of responding to phishing, but it looked wrong, and they didn't really know what to do with it because we didn't have a good reporting system yet. But they knew a security person, so they literally walked over to that person's desk and said, hey, I got this weird email, what do I do about it? As a result, we were able to stop that attack from working. It was targeting our administrators, trying to get into our admin accounts, and the security team was able to respond and prevent it from blowing up, because that one person happened to know somebody they trusted in security, walked over to their desk and said something. That's exactly what you would want to happen every time.

Speaker 4:

Yeah, go over and ask: did you mean to send me that email asking me to get 10 iTunes gift cards, scrape off the back and send you the codes? Ask the question. I think we can all safely say that nobody's ever going to get upset that you asked the question, just to be sure. I've got a question that doesn't involve the cockpit, but it's what we've been talking about the whole time: what if an organization wants to do better at safety science? If they want to get into it, if they're realizing they're falling short there, from your point of view, John, is there a specific area to start, or is it just so widespread that you pick and choose?

Speaker 2:

I actually mentioned security clutter; that's one of the ways, right? Just take stock of the security policies and rules you've accumulated over the years and actually pare those down. I think part of it really is a cultural shift and a mind shift. I've had discussions with peers, and a peer of mine made an excellent statement: we don't expect the CFO to make the company profitable, but we do expect the CISO to make the company secure. So part of it is that mind shift.

Speaker 2:

Right, security is, I don't want to say everyone's responsibility, but it's a shared responsibility; everyone in the organization contributes to the security of that organization. There's the film I actually named my company after, Safety Differently, which was created by a safety scientist by the name of Sidney Dekker, who also happens to be a pilot. In it, the head of a gas extraction company says: I had to realize that, as the CEO, I'm responsible for all aspects of the performance of my company, including safety. I'm responsible for the safety in my organization, not my safety team. Now, obviously the safety team has an important role to play in that.

Speaker 2:

But I do think that the leadership of organizations has to understand, acknowledge and accept that it is an executive responsibility to deliver security throughout the org. You can't just put it all on the CISO, and I've seen that happen. One of the CISOs I work with told me about an interaction he had with the CEO, and it was basically, hey, no breaches, right? And that was about it. The message is, if there's a breach, it's your fault. That's not fair to the CISO, and it's not fair to security. And I think the flip side is, as security people, we have to understand how the work is done, how the business operates, and work to support that in a secure way.

Speaker 4:

On that thought process too, John, what you were saying about breaches, that if there's a breach it's your fault: that's probably not the safety science side, but it's contributed a lot to burnout and maybe less productive work, leading to breaches and whatnot.

Speaker 3:

It could come down to span of control too, right? If I come in as the CISO and breaches are my responsibility, but you don't give me full autonomy to do whatever I think is necessary in the organization to prevent the breach, the things we might want to do, like remove local admin or whatever else we think is necessary, then that really creates friction and it really creates a problem, because now you're assigning me responsibility without giving me any authority to do anything about it.

Speaker 1:

Eric, I'm curious: is there a time when you were given the reins to shore up security and you did have a positive response? Maybe by leading some co-workers, or you yourself being the hero of the day early in your career, or enabling other people to have a win in that regard?

Speaker 3:

I've had a couple throughout my career. But the ones I look back on and have the most positive thoughts about are ones where we've come into an organization and really built a team that could then carry that organization forward. The team going through the trials and tribulations of breach response, then putting in place controls that help the organization evolve, and the team really being recognized as helping the organization: that's a win for me. We've done that a couple of times, and Nick's been part of that journey. That, I think, is the most rewarding, because you can look back.

Speaker 3:

As Dan Sullivan says, it can be somewhat disheartening to always measure forward. Like, oh, we were going to have 15,000 endpoints protected but we only got to 14,000; that can be kind of defeatist. Instead, look back and say, wow, we got all of these things done in the last six months, or even in the last week. In information security it can be a bit oppressive, because the noise never stops coming in. But if you look back and celebrate those successes, as individuals, then as the team, then as the organization, it's really rewarding, and it flips things around so you can reflect on the positive experience.

Speaker 1:

Seems like a lot of this stuff boils down to psychology, John, and you mentioned installing those icons in the cockpit. I think the seatbelt was invented in the 1800s, the 1880s or something, but wasn't implemented or mandated until 1965 or so. So do we have a lot of these tools already, and it's just more about, like Eric's talking about, enabling people, psychology, and how we train and create a culture? Or what does the future look like for implementing these safety protocols and safety science?

Speaker 2:

Yeah, I think you're right, Josh, that we really do have a lot of the technologies already. I think a lot of it is about how we use organizational psychology to get them implemented in organizations. And, Eric, you touched on something I think is really important. One of the big shifts in safety most recently, and this is reflected in Safety Differently and in the work of other safety scientists (Erik Hollnagel actually coined the term Safety-II), is that you need to focus not just on when the bad things happen and preventing those; you need to focus on the good work that you're doing, because you can't have a science of the non-occurrence of events. You can only have a science based on things that happen, not on things that don't happen. So promoting the work of security, promoting working securely, I think is really important.

Speaker 2:

There was a recent academic paper that was shared with me, a literature review of what works in security, and the top three things are things we know how to do, and two of them aren't even really security activities. Reducing your attack surface was number one. I mean, yeah, that's sort of security, but it's really just about, hey, do you have a good inventory of your systems? Are you turning off the things you don't need? Number two, patching, right?

Speaker 2:

Patching, maintenance, proactively upgrading your technologies: these things really promote security, but they don't just promote security performance, they promote other forms of performance as well. It's about working with organizations and getting them to understand, hey, if you want good security and good availability, software is like machinery: you absolutely have to invest in maintaining it, and today, for almost every organization, you need a certain minimum level of maintenance done, otherwise you're going to see bad outcomes on the other side. And the third one is multi-factor authentication. It's a very well-understood technology now, and it's increasingly adopted, but adopting it wholesale can do a lot to improve your security.

Speaker 3:

John, that's really poignant, and I'm reflecting on what you said about patching; I think there's still room to grow in those areas. We've seen all too often in organizations that they've got environments that are set up and then, for whatever reason, on the network side or server side, they're not maintained or patched, and then you run into legacy environments that are end-of-life, and there's really not a great reason why they're not patched. You can hear a thousand reasons about why they're not maintained, but there's not really an acceptable reason.

Speaker 3:

As you look at technologies that are out there today, and I'll pick on one, Meraki, from a networking standpoint, I think they did something really good in that the patching is just integrated into the ecosystem of the device. The device is going to go out, get its own patches, apply its own patches and maintain itself. Of course you have to allow it to do that. And as we pivot to SaaS-based solutions: I did a project recently with Dynamics 365, which is Microsoft's ERP system, and with that solution you're only allowed to be a maximum of two releases behind, so you have to take one release, then you can skip one, then you've got to take the next one. They don't allow you to fall too far behind, and I think that shift in methodology and in operating is really what we need, so we don't end up with these Oracle environments that are older than our kids, when the technology exists to maintain and support them in an automated fashion.
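A tiny sketch of the release-cadence rule Eric describes, with made-up version numbers and a made-up policy constant (the real Dynamics 365 servicing policy is defined by Microsoft, not by this check):

```python
# Hypothetical sketch of a "never fall more than two releases behind" check.
# Version numbers and the policy constant are illustrative only.
CURRENT_RELEASE = 42   # latest release published by the vendor (made-up)
MAX_LAG = 2            # policy: environments may trail by at most two releases


def upgrade_required(installed_release: int) -> bool:
    """True when an environment has fallen outside the allowed release window."""
    return CURRENT_RELEASE - installed_release > MAX_LAG


for env, release in {"prod": 41, "legacy-erp": 38}.items():
    status = "must upgrade now" if upgrade_required(release) else "within policy"
    print(f"{env}: release {release} -> {status}")
```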

Speaker 2:

Yeah, absolutely, and I'll just build on that. It's kind of funny: one of the other things I've worked on, not really related to my safety work but, let's say, adjacent, is risk quantification. And I say risk quantification, not cyber risk quantification.

Speaker 2:

I worked with a colleague at our last company, and we had a legacy system, and the team actually said, hey, we're worried about outages. At the time I was working on the SRE side, starting an SRE practice at the company, and I said, okay, well, my friend can actually analyze your system and estimate the level of risk in there. Well, he did the digging, and what he learned was that the risk of outages was actually pretty small. But along the way, by talking to the users and the business owners of the system and asking the question, well, what else are you worried about, he essentially learned that, hey, this is a really old system, it's functionally obsolete, we're worried about losing customers because of it; in fact, we already have lost one. And the business risk of running on that system ended up being much larger than the security or the availability risk.

Speaker 2:

One of the lessons from safety and security that I think we can bring to improve business operations and business performance is that people have a really hard time thinking about risk. If we can actually show them the risks in a way that helps them make better decisions, in this case it was an investment decision, so we needed to present that risk in dollars and cents, as the likelihood of losing a specific amount of money. When that analysis was presented to our business partners, they basically said, well, we've been delaying upgrading and replacing that system for a long time, and it went from that to, how soon can you get started?
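To make the dollars-and-cents framing concrete, here is a minimal Monte Carlo sketch of the kind of analysis John describes: estimating the chance of losing more than a given amount in a year. The event probability, loss range and threshold are made-up inputs for illustration, not figures from his actual engagement:

```python
# Illustrative Monte Carlo risk quantification with made-up inputs.
# Answers "how likely are we to lose more than $X in a year?" for one scenario.
import math
import random

random.seed(42)  # reproducible illustration

TRIALS = 100_000
EVENT_PROBABILITY = 0.15                  # assumed 15% chance of a loss event per year
LOSS_LOW, LOSS_HIGH = 50_000, 2_000_000   # assumed per-event loss range, in dollars
THRESHOLD = 500_000                       # decision question: losses above $500k


def sample_annual_loss() -> float:
    """One simulated year: either no event, or one event with a sampled loss."""
    if random.random() >= EVENT_PROBABILITY:
        return 0.0
    # Sample on a log scale so large losses are rarer than small ones.
    exponent = random.uniform(math.log(LOSS_LOW), math.log(LOSS_HIGH))
    return math.exp(exponent)


losses = [sample_annual_loss() for _ in range(TRIALS)]
expected_annual_loss = sum(losses) / TRIALS
prob_exceeds = sum(loss > THRESHOLD for loss in losses) / TRIALS

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Chance of losing more than ${THRESHOLD:,} in a year: {prob_exceeds:.1%}")
```

Presenting the output as "an X% chance of losing more than $500k this year" is the kind of statement business partners can weigh directly against the cost of replacing the system.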

Speaker 1:

That's a great business tip. Show them how it's going to affect the pocketbook, and it cuts to the chase.

Speaker 2:

Well, that's how businesses make decisions.

Speaker 1:

It makes sense. We're almost at an hour, so I just wanted to see if you guys had any other questions or topics you wanted to touch on. Otherwise, I'd like to hear what you've been working on recently with your company, John. Maybe you could give us a little insight there, and then we can wrap it up for the day.

Speaker 2:

Yeah, so after a long career, and based on the success of a talk I did on security differently, I'm starting my own consulting company, which for now is just me. I'm excited to offer my services to people who want to assess their security posture through a safety lens: are you doing the positive actions, the positive activities like maintenance, that will give you the good security outcomes? And to help them with strategy and with creating programs that drive higher levels of engagement with their employees and take advantage of the good work they're already doing.

Speaker 1:

Well, thanks so much for joining us today, John. It's been a really stimulating conversation, and I had a great time chatting with you, and I'm sure Eric and Nick did too. I guess I'll wrap it up there. My name is Joshua Schmidt, and I'm the producer of The Audit, presented by IT Audit Labs. You've been joined by Nick Mellom and Eric Brown, as usual, and today our guest was John Benninghoff, talking about safety science. Thanks so much, John. We enjoyed your time today and hope to see you down the road.

Speaker 3:

You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials and subscribing to this podcast on Apple, Spotify or wherever you source your security content.