The Audit

The Dark Side of Cyberspace: Data Breaches and the Price of Information

July 24, 2023 IT Audit Labs Season 1 Episode 22

The Audit - Episode 22

Want to understand the dark underbelly of cyberspace? Join us as we take a deep dive into recent data breaches at T-Mobile, discussing why fewer customers were impacted this time around compared to the January API attack. Get insights on how negligence in security could lead to government oversight and understand the power consumers can wield by voting with their feet. Learn how the fines collected from such breaches could fund cybersecurity improvements in vulnerable entities such as school districts.

Curious about the consequences of data breaches? We shed light on the implications of the cover-up by Uber's former CSO, who narrowly escaped jail time, and the devastating impact of the NextGen Healthcare breach affecting a million individuals. We also explore the rise of bug bounties as a popular tool among companies and stress the importance of credibility in the realm of ethical hacking. 

Ever wondered about the value of your personal information to hackers? We break down how hackers can misuse social security numbers, addresses, and names, and discuss the increasingly specialized roles within a cyber attack. Discover the sinister world of data brokers, who split and resell your personal information, and the challenges of resetting social security numbers. We also delve into how medical records can be weaponized and highlight the need for cybersecurity audits to safeguard data. 

Listen in, as we offer a compelling analysis of the attacker's viewpoint, the significance of logging activities, and why some attackers end up dwelling within systems for long periods. We also discuss the security maturity needed to protect a company from future breaches once they've been hit. If you're at all concerned about the safety of your personal data, you won't want to miss this deep dive into the murky world of data breaches and cybersecurity.

Speaker 1:

Thanks for joining us, and welcome to today's episode of The Audit, where we will be discussing recent headlines in the cybersecurity world. Today's focus will be on the recent data breaches at T-Mobile, the former CSO of Uber avoiding jail time, and the 1 million people, like you and me, impacted by the NextGen Healthcare breach. You are not going to want to miss this episode, so stay tuned. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials and subscribing on Apple, Spotify, or wherever you source your podcasts. More info can be found at itauditlabs.com.

Speaker 2:

Recently in the news: the T-Mobile breach, as reported by cpomagazine.com. T-Mobile is on its second data breach of the year, and the news about it is mixed. On the positive side, it appears to impact fewer than a thousand total customers, a far cry from the API attack that accessed the private information of some 37 million people in January. On the negative side, those that are impacted likely had their social security numbers, ID numbers, account PINs and other sensitive data revealed. Yikes. My first question, and I was kind of talking with you, Scott, when we were trying to find news articles to discuss today: why was there such a small number of customers compromised in this breach compared to the previous one? What does that tell you about the breach?

Speaker 3:

Yeah, that's a great question. So they allude to this API scraping attack; they talk about it a few times in the article here. That was a vulnerability in some fairly major piece of T-Mobile's customer management infrastructure. An API is just a way for a human or a computer to talk to a computer, basically, and get automated responses. If an API is poorly configured or poorly secured, you can sometimes iterate through every possible, say, customer ID value. If you figure out what those look like and shoot that at the API, the API says, oh yeah, here's customer number 1234567's customer profile. Hackers can automate that process, so they can basically scrape, as they say, every single value that the system is willing to spit out.
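The enumeration weakness Scott describes can be sketched in a few lines. This is a hedged illustration, not T-Mobile's actual API: the IDs and data are hypothetical, and a plain dictionary stands in for the backend.

```python
# A toy "API" with no authorization check: any caller can fetch any
# customer's profile just by supplying a well-formed ID.
CUSTOMER_DB = {
    1234567: {"name": "Alice", "ssn_last4": "1111"},
    1234568: {"name": "Bob", "ssn_last4": "2222"},
}

def insecure_lookup(customer_id):
    """Returns a profile for any ID; never asks who is calling."""
    return CUSTOMER_DB.get(customer_id)

def scrape(id_range):
    """The attacker's loop: try every plausible ID, keep whatever comes back."""
    found = {}
    for cid in id_range:
        profile = insecure_lookup(cid)
        if profile is not None:
            found[cid] = profile
    return found

stolen = scrape(range(1234000, 1235000))
print(len(stolen))  # prints 2 -- every record the "API" was willing to return
```

The fix is authorization on every call: the server should verify that the authenticated caller owns the requested record before returning it, and rate-limit or alert on sequential ID access.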

Speaker 3:

That previous breach was huge, tens of millions of customers, for that reason: it was this big target. This one seems a lot more targeted, and what I've heard around the internet is that maybe it revolved around attacking one particular T-Mobile store, or stealing an iPad or a device that belonged to an admin at one particular location. It's very uncommon to see attacks or breaches of medium size, because usually either the whole organization did something wrong or some person got targeted in a very specific way, and these kind of in-betweeners are interesting for that reason. The article doesn't clarify how it ultimately happened, but what I've read is that it had to do with attacking a specific person or a specific location.

Speaker 2:

From the CISO's point of view. Eric, what does that tell you? Is that kind of the same thing you were thinking as well, or did you have any additional thoughts on a news report like this?

Speaker 1:

We were thinking about how frequently T-Mobile seems to be attacked, and that they've got some serious internal issues that they don't seem motivated to fix, for one reason or another. There was a class action lawsuit against them a couple of years ago, I think. They had to pay 500 million, which is probably a drop in the bucket to them. I think out of that, a little over half went to all of the victims, meaning you might get 20 bucks at the end of it, which doesn't exactly make anyone whole. They were supposed to take some of that money and put it towards improving their cybersecurity posture. Obviously it doesn't look like they did, or maybe they have, but we don't know how that 150 million was spent. Which gets into the oversight problem: unfortunately, through a class action lawsuit, there isn't great oversight into how the punitive damages were spent by the organization.

Speaker 1:

It's tough, right? I think we've talked about this a couple of times in the past: how much government oversight should there be of companies? A cell phone company is pretty much a utility company at this point, where there are really three major players, AT&T, Verizon and T-Mobile, after their acquisition of Sprint. But where's the oversight there, and who holds these entities accountable to make sure that they're protecting our information as they should be? Clearly T-Mobile isn't. As consumers, we can vote with our feet, so to speak, and move to a different carrier.

Speaker 1:

Not that the other carriers haven't been breached as well, but it would seem to me that it would be interesting if there could be some sort of federal oversight where, when a company like this experiences a breach due to negligence, the fines collected could be used for improving cybersecurity posture at government entities or entities that need it. School districts, for example, where we see underfunded security programs that have huge effects on our population.

Speaker 1:

So if we could fine them a couple billion dollars that goes into a pool, along with all of the other companies that we essentially don't have a choice in using, and I'm thinking of the other utility players, that money could maybe be used for good. I know I went down a track there, but that's just kind of what's going through my mind, because clearly T-Mobile is not, or does not appear to be, motivated to fix their glaring issues, because they continue to come up. I get that they're a big target, but there are other big targets out there as well that aren't attacked successfully or breached as much as T-Mobile is.

Speaker 2:

That's interesting. That was going to be one of my follow-up questions: why so many breaches at T-Mobile, and what makes them particularly susceptible to these types of breaches? You mentioned they're a big target, but is it because it seems like a victimless crime, perhaps, because they're such a huge company? What makes a company like T-Mobile such an easy target? You mentioned that they're probably not really meeting the challenge of their security risk. What can be done as far as a consumer? It looks like T-Mobile has reset PINs and offered TransUnion credit monitoring and identity theft solutions. I don't know a whole lot about this topic, but to me that just seems like kind of a courtesy gesture and maybe not really a permanent solution.

Speaker 1:

Scott, I'm sure you've got some ideas on that as well, but I'll just throw mine out, and that is: as consumers, we do have to protect ourselves, and one way to do that is to freeze your credit with the major bureaus. Essentially there are four: TransUnion, Equifax, Experian and Innovis. Rather than going through some sort of intermediary credit monitoring service, you can work directly with the individual reporting bureaus and freeze your credit. Then, when you need to open new credit for a credit card or a loan of some sort, you can ask the entity providing the loan which reporting bureau or bureaus they work with, and you can just unfreeze your credit for a period of time.

Speaker 1:

So, for example, if you're going to go and get a car loan, you could unfreeze your credit for, say, a week to go and get that loan. Depending on who you're working with, either directly through the car dealership and their financing team, or your bank, they'll typically work with one or two of these bureaus, dependent on where you are in the country, to do a credit check, and then you just unfreeze it for that period of time. That's probably the best thing we can do as consumers. The other thing we can do is, as you mentioned, Josh, the PIN. We can reset the PIN on our accounts, and PINs can be, I believe, nowadays, up to eight digits. The default, I think, is four, but you can increase the security by having more digits in your PIN, up to eight, and I certainly recommend doing that as well, because it makes it that much harder for someone to guess than a four-digit PIN.
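Eric's point about PIN length is just combinatorics. A quick back-of-envelope, assuming purely numeric PINs where every digit is a free choice:

```python
def pin_combinations(digits: int) -> int:
    """Number of possible numeric PINs of the given length (10 choices per digit)."""
    return 10 ** digits

four_digit = pin_combinations(4)    # 10,000 possibilities
eight_digit = pin_combinations(8)   # 100,000,000 possibilities
print(eight_digit // four_digit)    # prints 10000: 10,000x more guesses needed
```

In practice, lockouts after a few wrong attempts matter more than raw length, but the longer PIN also resists offline guessing if the PIN data itself leaks.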

Speaker 3:

That's all really good advice. I would add, about the PINs, that I think even this article indicates those are some of the things that were exposed in these breaches. So the thing that's supposed to add extra security to your account is the thing being stolen, which puts us right back at square one. If we can't trust these companies to secure our data or our accounts, there's only so much we can do to duct-tape extra security onto their bad offerings. You do what you can as a consumer. You get your 20 bucks from the class action and you go buy yourself a latte. That's great. You do what you can.

Speaker 3:

But I think we, the US, really need to catch up with Europe and get something like the GDPR, where there are just clear regulations and penalties for having these kinds of data breaches. It shouldn't be anywhere from zero to $20 million that the company could have to pay, or whatever. There's the phrase "mandatory minimums," right, from the justice system. There just need to be fines and repercussions that companies can see clearly in the distance and use in their cost-benefit analysis, to figure out whether they really don't want to hire a CISO, or really don't want to invest $150 million into maturing as an organization, not just slapping some new security products in and saying that you fixed what was broken, right?

Speaker 1:

It seems like back in the 2021 attack, they were able to talk with one of the attackers that got in, and they said they got in by just doing routine external scans, so probably something like a tool called Shodan, which shows what appears to be open to the internet, what ports might be available to explore. It's just unfortunate for a company like T-Mobile, because they could use free tools or paid-for tools that integrate into their SIEM and other security tools. This is just basic blocking and tackling, and clearly they didn't have that in place to be able to see what they look like externally.
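The external-exposure checking Eric describes reduces, at its simplest, to asking whether a TCP port accepts connections. A minimal sketch of that idea; the host and port list are placeholders, and you should only run this against hosts you are authorized to scan:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check a few commonly exposed services on a host you own.
for port in (22, 80, 443, 3389):
    print(port, "open" if is_port_open("127.0.0.1", port) else "closed")
```

Services like Shodan do this (and banner grabbing) continuously across the whole internet, which is why defenders should be scanning their own perimeter before attackers do.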

Speaker 1:

So to me it's that negligence, right? They just don't care. And maybe I shouldn't say they don't care, but they're not doing anything about it. They're not interested in investing in security, and that, to me, is unfortunate. So, like you said, Scott, if there were some more oversight and those minimum penalties, which in this case should have been a couple billion dollars, that would incentivize them to take some better steps to protect our data, because it's probably better to spend a couple hundred million on security versus a couple billion in fines, I would think.

Speaker 2:

Sounds like there's not a lot of monetary incentive for these companies in the United States to be protecting their customers' data. So that's where the regulations would be handy for consumers, because as a normie consumer, I'm not really aware of all this news. This is not something that I see pop up to the top of my news feed. Luckily I'm not a T-Mobile customer, but that is very concerning. I think the size of the fine would help motivate compliance.

Speaker 1:

Hopefully the fines collected would do something good and not just go to some government bucket to be used for something else that's not necessarily going to improve the overall information security posture in areas that could really use the help, the public sector being one top of mind.

Speaker 2:

Interesting Scott. Do you have anything to add to that before we move on?

Speaker 3:

No, I think we've covered this one pretty well. Josh, you talked from your perspective as a normie consumer, but even for those of us working in InfoSec, at this point our eyes just glaze over when another breach headline comes out, because we know it's going to affect millions of people, we know those people are going to get free credit monitoring, and we know it's going to happen again in the next 12 to 24 months.

Speaker 3:

It's just the cost of doing business, and that cost needs to go up. I think we covered that very well. I agree.

Speaker 2:

Speaking of fines, segue: former Uber CSO avoids jail time in breach cover-up. This was reported by securityweek.com. Former Uber security chief Joe Sullivan was sentenced on Thursday to three years of probation for covering up a data breach suffered by the ride-sharing giant. The incident occurred in 2016, and it involved hackers stealing information of more than 50 million Uber users and drivers. That's where the glaze comes in, right? The hackers extorted Uber and were paid $100,000 through the company's bug bounty program. They were allegedly instructed by Sullivan to sign non-disclosure agreements falsely claiming that no data had been stolen. Wonderful. So I just learned about bug bounties, and I think that's really interesting. So, Scott, could you explain to us normies what bug bounties are and why they're important?

Speaker 3:

Yeah, sure thing. In the '80s, a law was passed called the Computer Fraud and Abuse Act, the CFAA, and in really, really broad language it basically said: misusing any computer system is a crime, it's probably a felony, and you're going to go to jail for a long time if you do it. That law, and some that followed it for a few decades, set this precedent that doing security research was not OK; that if you even so much as poke a website that you don't own or have approval to poke in the wrong way, that's a crime, even though it's a public system exposed to everybody. And that was the law of the land for a long time, until just a couple of years ago, and maybe Eric remembers more details about this, but that changed. It was stated, I don't know if by the attorney general or who, that security research was now OK as long as the intentions were good and the person was operating in good faith. It was a total sea change, and it didn't have the impact in the media that it probably should have, because all of a sudden, ethical hacking is legal.

Speaker 3:

And not only legal, but it can be treated as an extension of companies' security programs. Maybe a company doesn't have a lot of in-house bug hunting or security-flaw-seeking expertise, but they can have this great public program, published in the light of day, where they encourage third parties who don't work for the company to test their products, especially things connected to the internet, and reach out and provide that remediation information and say, hey, I found a bug in your site, it could lead to XYZ, I'd like to help you fix it. And companies with mature bug bounty programs will even have published rates. They'll say: for a low finding, I'll give you $100. For a medium, I'll give you $1,000. And for a high severity finding that could lead to the compromise of the website, let's say, we'll give you $50,000. It's been really, really successful, as everybody now becomes kind of a freelance ethical hacker and can start reaching out to companies proactively and sharing that information.
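The tiered payouts Scott describes map naturally to a lookup table. The dollar figures below are just the examples from his description; real programs publish their own schedules, often per asset:

```python
# Example severity-to-payout schedule, using the rates mentioned above.
BOUNTY_TIERS = {
    "low": 100,
    "medium": 1_000,
    "high": 50_000,   # e.g. a finding that could compromise the whole site
}

def payout(severity: str) -> int:
    """Look up the reward for a validated finding of the given severity."""
    tier = severity.strip().lower()
    if tier not in BOUNTY_TIERS:
        raise ValueError(f"unknown severity: {severity!r}")
    return BOUNTY_TIERS[tier]

print(payout("High"))  # prints 50000
```

The point of a published schedule is predictability: a researcher can see the reward in advance, which is exactly what distinguishes a bounty from an ad hoc extortion payment.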

Speaker 2:

So has a cottage industry of white-hat hackers kind of sprung up around this new posture? For sure, as capitalism tends to do.

Speaker 3:

There were these companies that were early to the party, like HackerOne, and there are some others, and they're actually software-as-a-service platforms to organize bug bounty efforts. So just like Indeed.com is kind of a clearinghouse for jobs at all sorts of companies, websites like HackerOne are clearinghouses for bug bounty programs. They can help with publicizing the offerings from each company, and they can help with handling the communication between the person doing the disclosure and the actual company behind the scenes. So yeah, again, it's all in the light of day now, now that people doing good-faith security research aren't treated like criminals anymore.

Speaker 2:

So, since this former Uber CSO kind of undermined that whole structure, Eric, does this punishment fit the crime in your mind? Is it proportionate to what he did here? Because it sounds like this is an important piece of your industry going forward, and it needs to keep its authenticity and its credibility.

Speaker 1:

Well, I don't know Joe personally, or a lot of the inside details behind what happened here. I think it's really easy to scapegoat this guy and say that what he did was wrong, and sure, what he did was wrong, but it is extremely unlikely that he was acting alone. First, this is a public company, and under the bright lights of the breach there were, in the case of Uber, probably hundreds of people involved in the aftermath of the breach discovery. It's not Joe Sullivan on the phone, hiding in the shadows, talking with these hackers. The paperwork and the documentation that he allegedly had them sign, this non-disclosure claiming that no data had been stolen: it wasn't Joe that wrote that paperwork up. He's got quite a large legal team behind it, and this company was probably very concerned about their stock price and what would happen to it if it was discovered that data had been stolen.

Speaker 1:

So while he was the fall guy for this, absolutely he could have walked and said, hey, my reputation is more important, I'm not going to take the fall for this, I'm not going to do anything unethical here. It doesn't sound like he took that path, but I don't think we can hang the blame on one individual. There were hundreds of people involved in this, and it only came to light after new management, a year later, learned that there was data stolen that wasn't fully disclosed. So was it just Joe that didn't disclose that data? It doesn't sound like it. It sounds like there was a leadership change over there, probably as a result of what happened.

Speaker 2:

So Joe is probably not just downloading a template of an NDA off of Google and throwing this out on the dark web.

Speaker 1:

No. Having been under the bright lights of a breach investigation, it consumes hundreds of hours of time, depending on the size of your organization, with lots of people involved, and it's a very intense and time-consuming process.

Speaker 2:

What is the vibe like around an office when something like that's going on? Is it like, I don't want to go to work today? Or is it, I'm showing up at 5 AM, all hands on deck? What's the energy like in those types of situations?

Speaker 1:

So one of the breaches that I was involved with was related to a third-party entity that was breached, and we had some data sharing between the organizations. So the organization I was consulting into was really focused on: was any of our data exposed? And working with the third-party entity who was breached, trying to get information from them about the breach or what data was stolen, is really difficult, because their team gets behind a legal team, and on any call that you have, you have lawyers on the call, they have lawyers on the call, and the communication is not fluid at all. You really don't get great information. What they'll do is say, we're still investigating, and we'll provide a report at the end of our investigation. So, internally, then, to directly answer your question about what it's like:

Speaker 1:

The teams that we were working with were really trying to understand what data could have been at risk, and what breach notification we might have needed to do: if this data was exposed, we may need to notify X number of people. So you're preparing all of that and trying to understand what that access might have looked like without having good information. A lot of the conversations are around hypothetical understandings of the problem, because you don't know exactly how they were breached. You have forensics teams trying to unpack the data and look for signs of anomalies, without full cooperation from that third-party entity. As for the quote-unquote vibe around the office: people are distracted from their day job, so there's a productivity hit as they're pulled into other meetings to repeat information five, six, or seven times, sometimes a lot more, as you go through this and talk with different individuals, different legal teams, different breach coaches, really just trying to understand what was impacted and what it's going to cost at the end of the day.

Speaker 2:

Maybe, Scott, you could answer this one: are these bug bounty programs making companies more susceptible to extortion? Does it kind of open the door for that? What's your thought?

Speaker 3:

I would say no. This article does seem to kind of conflate extortion and bug bounty, but the way I'm reading it is just that they paid them the ransom, essentially, through the bug bounty program, which, to Eric's earlier point, makes it seem more likely that more people at higher levels within the org were involved, because the CISO can't cut a check for a hundred thousand dollars in any organization that has their stuff together, right? So there was a treasurer or a CFO or somebody who was approving these things. I wouldn't say that ransoms and bug bounties are at all the same thing, because again, it gets back to this idea of intent, of ethical disclosure versus taking somebody or some organization for everything they're worth. So no. Just like open source software in general, and the First Amendment in the United States, which protects people's ability to speak about controversial things, bug bounties are awesome, because they incentivize and protect and put structure around that process of being a good digital citizen.

Speaker 2:

Yeah, that's great. It sounds like a pretty valuable addition to your industry, especially when it comes to things like our next topic: a data breach at NextGen Healthcare, with 1 million people impacted, reported by securityweek.com. Healthcare solutions provider NextGen Healthcare has started informing roughly 1 million individuals that their personal information was compromised in a data breach. Another general question that might help some business leaders, business owners, some of the people you work with, understand what it is you guys are doing and the value you're adding. It might seem obvious, but what makes things like social security numbers, addresses and names so valuable to hackers, and in these large quantities? Is it to do identity theft, go and open up a bunch of different credit cards and then go on a shopping spree? Are they selling it? What's the deal with that? What's the value there?

Speaker 3:

Yeah, I'll just say real quick that, in general, the cybercrime world has more and more gone in the direction of specialization, a distribution or separation of roles within what you might call a cyber attack. In a modern cyber attack that's fairly complex and well resourced, you may have one company, or one outfit, that's selling access. They may have lists of user credentials, usernames and passwords, VPN credentials, those sorts of things, and they're not interested in committing crimes themselves. They'll resell that access to a group that is looking to commit crimes. So you have the people who facilitate the access. Then you have the people who act on the objectives and use that access to go in and steal personal information, like in these cases. But then they get that and they may not even sell it directly. They may resell it in bulk to a data broker, and that data broker may then split it out and actually go through the data to see what was even there. If they find some high-profile, useful information that could be used to carry out future attacks, they may bundle that up and resell it at a higher price than the rest of the consumer data. So there's this really complex underground industry behind a lot of these types of cyber attacks.

Speaker 3:

And to your point, Josh, each medical record itself is not worth a lot of money. I forget what the going rate is today; it might be 50 or 60 bucks or something. But if you have 15 million of them, that starts to be valuable in bulk. So the smash-and-grab, putting it on the dark web and then having somebody else figure out how to reuse that information for malicious purposes, that's yet another cog in the machine here. The other thing about social security numbers in general, that most people probably understand but may not think about much, is that you can't change those as easily as you can get a new credit card, or stop working with, say, HealthPartners and go across town to Fairview. Medical record numbers are also fairly easy to change and reset, but social security numbers are meant to be with you from birth all the way until your final days, right? So when one gets breached, that's a valuable thing to resell, because it's very, very difficult and cumbersome and time-consuming to refresh that part of your life.
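Scott's bulk-value point is simple arithmetic. Using his rough per-record guess of $50 (a figure he hedges himself, and the record count from his example):

```python
# Back-of-envelope: records worth little individually, a lot in bulk.
price_per_record = 50          # dollars; the rough going rate guessed above
records_stolen = 15_000_000

haul = price_per_record * records_stolen
print(f"${haul:,}")  # prints $750,000,000
```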

Speaker 3:

Unless you have a good reason, it's not easy to just go to the courthouse and get a new SSN. There are a whole lot of red flags that raises, both during the process and afterwards. If you're trying to get a new job and they see that your SSN started in 2023, that's something you have to explain, right? Are you in witness protection? Are you a criminal? It's just not a straightforward process. And of course, because SSNs are unique and supposed to be protected, they get used everywhere as someone's unique code that's supposed to be secret. But when it's leaking out of these organizations by the millions, obviously it's no longer serving that purpose.

Speaker 1:

You know, josh, to your question about what's the kind of the value or the reason behind it, it's all about the data, right? So what could you do if you had millions of health care records? You know, if you think about the information that you have around, the types of drugs that that people are using and the myriad of ways that you could potentially exploit that information, sell that information to competitive drug manufacturers, if you, if you had that person's medical history, you kind of a big data approach of things that you could mine out of there that that might have value to another organization. You know data is power and and being able to transact with that really private information is HIPAA data or or healthcare data is extremely private and companies must maintain due care of how they protect that information. So there's audits that happen or are supposed to happen with companies entrusted with healthcare information. But if all of a sudden, you were able to get your hands on lots of records, that information is pretty powerful because you wouldn't have access to that data potentially otherwise.

Speaker 2:

I believe it's stated in this article that no medical records were accessed. When you're approaching a problem like this from a cybersecurity standpoint, how can you be sure about those types of statements? How can you see what was accessed and what was not accessed and left behind? Is that part of your assessment of the whole situation, or is there kind of a crumb trail as to what's been compromised?

Speaker 1:

Yeah, on this particular one, I think they had two this year, unfortunately, but on this one it was names, birth dates, social security numbers and addresses, and Scott talked about the things that you could do with those. I actually worked on this breach, the second one, with an entity that had a related product that wasn't impacted here, and I had formed somewhat of a relationship with their CISO, David Slayzak, from a previous breach. So when this one came up, I was able to just give him a quick call and get the scoop, as best as he could share, on what occurred.

Speaker 1:

So it is nice to have some of those networking relationships, where we can just talk to each other as humans and try to improve the state of security. After the previous issues they had at the beginning of the year, they were really retooling how their information security program worked, and doing a great job in that area. I think what happened here was that this was through an entity they had acquired a couple of years ago and were folding into their environment. Josh, were you asking me more about the social security numbers, or were you asking me something different?

Speaker 2:

No. So when you dive into a project like this, how can you make a statement like "no medical records were accessed"?

Speaker 1:

They would know what the attacker had access to. So, for example, if the attacker had compromised an account, they'd be able to see what data that account had accessed, kind of watch that account through the system, if you will. If it was a privileged account, you could see what other accounts it created. Hopefully you've got logging in place to look for those things; otherwise there are times when attackers can actually go and delete data.
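The idea of "watching an account through the system" can be sketched in a few lines. This is a minimal, hypothetical example, assuming a simple `key=value` audit-log format of my own invention (not NextGen's actual logging): group every action and resource by the account that performed it, then pull the trail for a suspect account.

```python
# Hypothetical sketch: given simple audit-log lines, list everything a
# suspect account touched. The log format and field names are assumptions
# made for illustration only.
from collections import defaultdict

LOG_LINES = [
    "2023-05-01T10:02:11 user=svc_backup action=READ resource=demographics_db",
    "2023-05-01T10:05:43 user=jsmith action=READ resource=billing",
    "2023-05-01T10:07:02 user=svc_backup action=READ resource=ssn_table",
    "2023-05-01T10:09:55 user=svc_backup action=CREATE_USER resource=acct_tmp01",
]

def accesses_by_user(lines):
    """Group (action, resource) pairs by the account that performed them."""
    seen = defaultdict(list)
    for line in lines:
        # Skip the timestamp token, then parse the remaining key=value fields.
        fields = dict(f.split("=", 1) for f in line.split()[1:])
        seen[fields["user"]].append((fields["action"], fields["resource"]))
    return seen

trail = accesses_by_user(LOG_LINES)
print(trail["svc_backup"])
```

With real tooling this is a SIEM query rather than a script, but the principle is the same: if the logs exist, you can enumerate exactly which data stores a compromised account reached, including any new accounts it created.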

Speaker 1:

In this particular case, it's likely they know this because the healthcare information was probably stored in a separate environment, not accessible from the environment that kept the Social Security numbers, birth dates, addresses, things like that. Maybe the systems were linked, but it sounds like, from their investigation, access control policies prevented one system from talking to the system that held the crown jewels. I don't know exactly what transpired there, but it is very likely that entities such as NextGen segregate data that might be of higher value. And, Scott, you've probably got some thoughts there too, so I won't ramble on; I'll let you jump in.

Speaker 2:

So, scott, are you going through like logs, caches of logs, or log in activities essentially in the system, and then looking for, looking for bad?

Speaker 3:

actors At the most primitive. You know that's how it works, right? If you were breached and you just had you didn't have a cybersecurity program internally, let's say you just had all the systems that do your IT stuff and outreach happens, yeah, you're going to be left picking up the pieces. And, tarek's point, the attacker could also have trashed on purpose some of those sets of logs and information that would help you reconstruct what happened and how it happened. So it really depends on how mature your internal kind of incident response and security operations infrastructure is and program is, and that's not something you can really fix overnight. You know you can't just buy a new product to make yourself have that capability. You can buy a product that can give you that capability but you have to bake that into. You know all your applications, all of your systems, all the places that sensitive information are stored, those systems that authenticate users that kind of act as central points of control over who can do what. It's very much different from organization to organization. There are definitely best practices and some companies are way better than others.
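The point about attackers trashing logs suggests one simple defensive check: look for holes in the log timeline. A toy sketch, assuming ISO-format timestamps and an arbitrary one-hour threshold chosen purely for illustration:

```python
# Hypothetical sketch: flag suspicious gaps in a log timeline. An attacker
# who deletes log entries often leaves a hole where records should be.
# The one-hour threshold is an arbitrary assumption for this example.
from datetime import datetime, timedelta

TIMESTAMPS = [
    "2023-05-01T00:00:00",
    "2023-05-01T00:15:00",
    "2023-05-01T03:40:00",  # ~3.5-hour hole: were entries deleted here?
    "2023-05-01T03:55:00",
]

def find_gaps(stamps, threshold=timedelta(hours=1)):
    """Return (start, end) pairs where consecutive entries are too far apart."""
    times = [datetime.fromisoformat(s) for s in stamps]
    return [(a, b) for a, b in zip(times, times[1:]) if b - a > threshold]

gaps = find_gaps(TIMESTAMPS)
print(len(gaps))
```

In practice this is why forward-only log shipping to a separate, append-only store matters: even if the attacker wipes local logs, the off-host copy preserves the timeline.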

Speaker 3:

There was a statistic from a few years ago now, and I don't know if this has changed, but it was conventional wisdom in information security at one point that the average dwell time, the time between when an attacker initially gains access to, say, a business network and when they're found out and kicked off, is something like eight or nine months. Even if it were a month, that's such a long time for someone to be able to move around within your environment, test things out, find what they can get access to, and start shipping that stuff back out the door, right?

Speaker 3:

But nine months, or whatever it is now, is just an astronomically long period of time. Imagine a crime that happened over the course of nine months and involved hundreds of systems, and then trying to go back retroactively and put that timeline together. A lot of companies don't even keep security data for more than, say, six months unless they're required to by some regulation. So it's totally different and unknowable from the outside, as Eric kind of hinted at, until it happens. And then, very quickly, in that daylight, you see what your capabilities are and where your gaps are.

Speaker 2:

Especially when you have people out there that can beat Elden Ring like five times in nine months. That's a crazy amount of time to have access to a system.

Speaker 3:

It is. One more thing I'll say about this, and this gets us back to T-Mobile: when a company has been breached once, that's a data point for attackers. They look at that and see, hey, somebody else was successful here; I bet this company doesn't have great security practices, because this has happened at least once before. The last paragraph in this article confirms that was the case with NextGen. I think Eric mentioned it too. When you see a company that's been taken once, there's a lot of opportunity, whether from the same group, if that group is reselling the access that let them get into the victim in the first place, or from a totally unrelated group that sees this in the news and says, I know who I'm going after next; this seems to be a soft target. The company may be scrambling right now to get its security act together, but you can't change your security maturity overnight, and other attackers will take advantage of that.

Speaker 2:

This has been a really informative conversation. I think it's going to be very useful to CFOs, other CISOs that might be listening, normies like me, and people thinking about starting a small business or even running larger ones. We talked about cell phones, ride sharing and healthcare, and we use all three of those pretty much every day of our lives. So can we wrap up with maybe a couple of bullet points, a quick elevator pitch of what a normal consumer can do to protect themselves against these types of things, since they seem to be inevitable?

Speaker 1:

Freeze your credit directly with TransUnion, Equifax, Innovis and Experian. That will help you immensely if you control when your credit is frozen and unfrozen.

Speaker 2:

It's a great tip. Scott, do you have a couple quick tips for peeps?

Speaker 3:

Yeah, whenever we talk about breaches or victims of computer crime, it's good to remind people that you don't have to be the most secure out there, but you do have to be better than average, or better than the folks doing it poorly. That's not a very friendly-neighbor way to look at cybersecurity, but it's very true. You don't have to swim faster than the shark; you just have to swim faster than your buddy a couple of lengths behind you. I think that mentality works at the business level, standing up and maintaining a mature security program, but also at the individual level: just stick to the best practices that keep you out in front of the pack.

Speaker 3:

Use a password manager, and don't reuse passwords across sites. Keep a set of unique, long, complex, hard-to-guess passwords, one per site, and let the password manager track them. Then protect those accounts with another layer of multi-factor or two-factor authentication, using something like your cell phone or a security key, to make it really difficult to take over an account that would give someone access to your personal information, or even into the organization you work for, becoming that on-ramp for attackers. You don't need to be perfect in your cybersecurity posture; you just need to be better than average, in hopes that attackers will move on to the next, more vulnerable target.
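The "unique, long, random password per site" advice can be demonstrated with nothing but the standard library. This is a minimal sketch of the generation step only; a real password manager also encrypts its storage, which this deliberately omits. The site names and character set are arbitrary choices for the example.

```python
# Minimal sketch of per-site random password generation using the stdlib's
# cryptographically secure `secrets` module (never use `random` for this).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=20):
    """Return a random password drawn character-by-character from ALPHABET."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent credential per site, so a breach of one site's password
# database can't be replayed against the others.
site_passwords = {site: new_password() for site in ("email", "bank", "work")}
```

The design point is that reuse, not weakness, is what turns one breached site into many compromised accounts: with unique credentials, a leaked password is only ever good for the site it came from.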

Speaker 2:

Great tips guys. Thanks so much.

Speaker 1:

You have been listening to the audit presented by IT Audit Labs. We are experts at assessing security, risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization. Thanks for listening.

T-Mobile Data Breach and Cybersecurity Oversight
Bug Bounties and Breach Cover-Up
Value and Risks of Personal Data
Protecting Against Data Breaches