The Audit - Presented by IT Audit Labs
Brought to you by IT Audit Labs. Trusted cyber security experts and their guests discuss common security threats, threat actor techniques and other industry topics.
IT Audit Labs provides your organization with the leverage of a network of partners and specialists suited for your needs.
We are experts at assessing security risk and compliance, while providing administrative and technical controls to improve our clients’ data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, while our security control assessments rank the level of maturity relative to the size of your organization.
Building Resilient, Secure Software: Lessons from DevSecOps Experts
Building secure software isn't optional; it's critical. Here's how you can do it right!
In this episode of The Audit presented by IT Audit Labs, we’re joined by Francis Ofungwu, CEO of DevSecFlow, to break down the urgent topic of software security. Together with Nick Mellom and Bill Harris, we dive into the common security threats developers face today and discuss the vital steps every company should take to secure their software development lifecycle.
In this episode, we’ll cover:
- The biggest software security threats developers face in 2024
- How to integrate security seamlessly into the software development lifecycle
- The convergence of infrastructure security and software security
- The role of AI in secure coding and software development
- The importance of threat modeling and attack surface reviews
- How to create a more resilient software supply chain and manage risk effectively
Whether you’re a developer, security pro, or IT decision-maker, this episode is packed with actionable insights to elevate your security strategy and ensure your software is built to withstand today’s evolving cyber threats.
Don’t forget to hit that subscribe button and drop a comment below on your top takeaway!
#CyberSecurity #DevSecOps #SoftwareSecurity #AICoding #IncidentResponse #ITSecurity #CloudSecurity #RiskManagement
Welcome. You're listening to The Audit presented by IT Audit Labs. I'm your co-host and producer, Joshua Schmidt. As usual, we're joined by Nick Mellom, and today we have Bill Harris from IT Audit Labs filling it out as well. We're also joined by Francis Ofungwu. Francis is the Chief Executive Officer at DevSecFlow and Global Field CISO at GitLab, among many other hats he's wearing here. He's got a pretty prolific LinkedIn page, so we'd like to hear more about that. So, without further ado, I'll hand it over to you, Francis. Our topic today will be software security, so maybe give us a little background on yourself and tell us how you got interested in what you're doing now.
Speaker 2:Sure, and just a quick point of clarification. You can choose to leave this in, but I left GitLab about a month ago to start my own firm. So you're right, I am the CEO of DevSecFlow. We are a company focused on helping organizations build secure, reliable software. My background is fairly straightforward, like most people in tech or in cybersecurity: I've been doing this a while, but in different domains.
Speaker 2:So I started off doing sort of the security analyst role at a company called Rackspace, which is a cloud and managed hosting company based in Texas. For a lot of my time at Rackspace I worked out of the London office, where I spent close to six years helping set up that team and establishing a security capability for EMEA and APAC, and during that time I learned a lot about the initial days of cloud and exactly what customers were looking for in all things software security and cloud security. I think that's where I cut my teeth in this space. So over the last 20 years I've had a variety of roles in that software security and application security space, doing CISO roles, product roles, field CISO roles, like you just mentioned, and really helping organizations understand where their risks truly are in this space and coming up with more robust solutions to help them address that.
Speaker 3:That answered one of my questions, Francis. I was going to ask how you ended up doing schooling in London; it sounds like you were working over there and ended up doing some schooling.
Speaker 2:Yeah, I'll give you the short version of the story. So I wanted to go back to school to get a master's degree in cybersecurity, and I had two options: one in Chicago and one in the UK. The UK option was shorter, but it also meant I got to live in London for what was supposed to be 18 months but turned out to be close to seven years in total. So yeah, that was my entry into that market and that part of the world, and I loved every second of it.
Speaker 1:So I usually like to start out the podcast with a little icebreaker question Are you a soccer fan now or do you like to watch football, Francis?
Speaker 2:All of the above. So, soccer fan: I actually coach my son's under-10 soccer team, so I get to wear that hat on the weekends, but I still like football. I have the fortunate or unfortunate distinction of being a Bears fan, and that's come with its own trials and tribulations over the last few years. But yeah, I tend to dabble in both.
Speaker 1:I can relate as a Vikings fan; we share a similar disposition. How about you, Nick? We don't know what side of the fence Nick's on. Now he's down in Texas, but he's been a lifelong Vikings fan, and we have a game coming up, so he may be feeling a little torn about that.
Speaker 3:Vikings fan. But uh, I gotta say this weekend I think I'm going for the Texans because the Vikings have let me down for my whole life. So I figured, you know, maybe we can bring it to them this time. So I'll be going for the Texans this weekend.
Speaker 1:Bill, what do you like to do on Sundays, while we're all wasting our day, uh, sitting inside watching TV?
Speaker 5:Yeah, I would like to do that too, but I've got a 12-year-old who's really active, so I'm taking her to softball for the next several weeks.
Speaker 1:That's great, yeah. Well, we'll jump back into it. We wanted to talk to you, Francis, about software security; that came up in our pre-production meeting and seemed to be something that really animated you. So I wanted to ask you: what are some common security threats that developers are facing right now when they're developing modern software projects? And maybe you can give us kind of a high-level overview of how you view security in software development.
Speaker 2:Yeah, so I've been focused on software security, and maybe application security as well, significantly over the last five years. I've dabbled in it throughout my cybersecurity career, but the last five years are where I've taken a deeper view of the subject matter. It really started while I was on the consulting side of my career, where we were advising all these large Fortune 500 organizations on supply chain threats, but focused more on the physical side, and the whole digital supply chain came up as, at the time, a smaller risk than the physical supply chain security problems. What we saw in those last five years is that the dependency we have on our digital ecosystem, whether it's cloud, software, whatever it is we're doing as part of our digital toolchains, has created this new risk: a digital supply chain risk, or software supply chain risk, that most organizations aren't as equipped to handle as they are their physical security or physical supply chain risk. So really, in that time, when we speak to CISOs or CTOs, or really people that have responsibility for delivering digital products, whether that's through software or any other digital product.
Speaker 2:We're hearing that, because of the lack of cohesion in the development process, trying to get governance over everything from the ideation process to the release process is a challenge. And back to your question around developers: I think it starts with developers, or developers are a huge stakeholder in this process, but it's not just their responsibility. It's: how do we get every stakeholder involved, from your developers to your security engineers to your release engineers, everybody that exists in this whole (it's not even new anymore) DevOps toolchain? How do we get them all singing from the same hymn book around every governance step required to secure the end-to-end development lifecycle?
Speaker 3:Wow, that sounds like a heavy lift. You mentioned physical security before; I was just curious if you guys were doing any sort of social engineering work alongside that.
Speaker 2:So I had responsibility for data center security as well in my previous life, so there were some social engineering aspects to that role. But doing data center security is an interesting beast, because you are going out to the middle of nowhere, to data centers that are unmarked, and trying to figure out the best way to get into man traps or things that are typically quite difficult to penetrate. But I can't say I miss those days around the data center security side. I'm definitely more comfortable in the software digital world.
Speaker 3:I'm sure you got a lot of fun war stories from those days.
Speaker 2:Oh yeah. As much as you want to put your best foot forward, or your best case forward, for data center security, what you end up doing is having to manage a lot of third-party audits. When I say put your best foot forward, I mean you have your SOC 2 and your ISO reports that say here are all the controls that you have. But because you have all these fairly large public sector and financial sector institutions that have the right to audit, you're basically managing a lot of auditors coming to your door and trying to validate the controls themselves. And we've had people show up and try to climb water tanks to make sure the water is at the right level, or do crazy things that are not necessary from an auditor standpoint. But yeah, as I said, I don't miss those days.
Speaker 5:Do they just show up and try to do that, or do they ask for permission?
Speaker 2:They have permission based on their contractual agreements, as opposed to just knocking at your door, but there is never a no. Audits were the same in my experience.
Speaker 1:Nick and Bill, maybe you could speak to how you view software security and how that shows up in your security practices, and then maybe pass it back to Francis to give us some insights or maybe some stories from the real world. How does that show up? What does that really mean? Is that like the CrowdStrike thing that we've seen, when those things aren't done correctly? How does that show up as a threat or a risk in the real world?
Speaker 5:Sure, yeah. So, Francis, I see a rift between, say, the folks who work on infrastructure security and the software security fields. How has it been trying to bring those two groups of people together to really collaborate and come up with more of a holistic security perspective?
Speaker 2:Yeah, I'd actually like to hear more about the rift, because it differs slightly from what I'm seeing. I would say two, three years ago there was definitely that distinction between what you had to do for infrastructure security and what you had to do for software security in terms of coming up with a holistic software supply chain solution. But what we're seeing now, especially as more people adopt cloud infrastructure and use a programmatic way of configuring it, as in infrastructure as code, or just basically sending your cloud provider instructions via scripts and code, is an intersection between the type of tests you have to do for your software security health and your infrastructure security health. So I'm actually seeing those two worlds converging toward the same pipeline of activities that you have to follow to achieve the same level of assurance. So, if you don't mind, I would love to hear what you're seeing in terms of that rift that you mentioned.
Speaker 5:Yeah, so I think you're right, insofar as there is this convergence from on-premises infrastructure into the cloud, and then you're changing the whole paradigm from infrastructure on hardware to infrastructure as code, right? To your point, it's coming together. Really, the quote-unquote rift I'm referring to is for private clouds, right, and private infrastructure, where you have teams of people highly skilled in securing their infrastructure, securing their servers, their networks, their IAM and so forth, and then you have the developers who have to work within all of that, right? How do they secure their applications and have their applications work in a way that's compatible with sort of this self-managed infrastructure, where typically you don't have infrastructure engineers who are super skilled at development tasks? So there's a little bit of a language barrier there.
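To make the convergence both speakers describe a little more concrete, here is a minimal sketch of the idea that infrastructure definitions can be reviewed in the same pipeline as application code. The resource shapes below are invented stand-ins, not a real Terraform or cloud provider schema.

```python
# Minimal sketch: the same kind of static check can gate both app code and
# infrastructure-as-code, because both are just text reviewed in a pipeline.
# The resource shapes are simplified stand-ins for illustration only.

SAMPLE_PLAN = {
    "resources": [
        {"type": "security_group", "name": "web", "ingress_cidr": "0.0.0.0/0", "port": 22},
        {"type": "storage_bucket", "name": "logs", "encrypted": False},
    ]
}

def review_plan(plan: dict) -> list[str]:
    """Return human-readable findings for risky infrastructure definitions."""
    findings = []
    for res in plan["resources"]:
        if (res["type"] == "security_group"
                and res.get("ingress_cidr") == "0.0.0.0/0"
                and res.get("port") == 22):
            findings.append(f"{res['name']}: SSH open to the whole internet")
        if res["type"] == "storage_bucket" and not res.get("encrypted", True):
            findings.append(f"{res['name']}: bucket is not encrypted at rest")
    return findings

if __name__ == "__main__":
    for finding in review_plan(SAMPLE_PLAN):
        print("FAIL:", finding)  # a real pipeline would exit non-zero here
```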
Speaker 2:Yeah. So I would say that my experience in that space is that developers are obviously trying to move at the speed of the business, right, and they're looking for some level of frictionless guidance to implement those security best practices, or those compliance obligations, or whatever you want to govern that end-to-end delivery lifecycle with. The challenge that we have today is an over-reliance on tools within that delivery lifecycle, and tools, for lack of a better description, are garbage in, garbage out. So if you don't have the engineering capacity to program them, or to implement the tools with the context of your environment, you're going to be getting a lot of false positives and things that essentially detract from the trust that you're trying to establish with your developers. There are so many examples in my last two gigs, or last two stops, where we've gone to developers and said here's a spreadsheet of all the CVEs in your specific code, and then, like, the first four lines are not applicable, this one is not exploitable in production. And they just go down the list and do a very high-level triage of the findings and tell you why your tool is not giving you the information that is required to secure the entire delivery lifecycle. It starts to erode that trust, and then you start to have this conflict, or this lack of collaboration, between what security is saying and what developers are trying to achieve.
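As a rough illustration of the triage Francis describes, here is a sketch of filtering raw scanner findings by context before they ever reach a developer. The fields (reachability, production exposure) are assumptions about what that context might look like, not any particular tool's output.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    package: str
    severity: str
    reachable: bool       # is the vulnerable code path actually called? (assumed field)
    in_production: bool   # does the affected component ship to prod? (assumed field)

def triage(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    """Split raw scanner output into 'act now' and 'deprioritized with a reason'."""
    actionable, deprioritized = [], []
    for f in findings:
        if f.reachable and f.in_production:
            actionable.append(f)
        else:
            deprioritized.append(f)
    return actionable, deprioritized

raw = [
    Finding("CVE-2024-0001", "libfoo", "critical", reachable=False, in_production=True),
    Finding("CVE-2024-0002", "libbar", "high", reachable=True, in_production=True),
]
act, skip = triage(raw)
print(f"{len(act)} actionable, {len(skip)} deprioritized with documented context")
```

The point is not the two boolean fields; it is that the filtering happens before the spreadsheet reaches a developer, so trust is not spent on non-issues.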
Speaker 2:The way you address that is having some level of understanding from the security side around what the value stream, from again ideation to delivery, looks like within your organization, and really getting into the weeds, for lack of a better term, around the delivery cycle.
Speaker 2:So you're giving more bespoke guidance to your software developers, and not just something that you can't really explain back to them because whatever scanning tool you've implemented has said it's a problem. The way that we've seen it work in the past is really engineering solutions throughout the development lifecycle.
Speaker 2:What do I mean by that? It's not just doing static scans on the code they're developing. It's: how are we doing permissions? How are we doing the build process? Can we get some level of attestation or assurance that your build artifacts have gone through some level of provenance? There are so many steps that you have to do validation on, and if you can create a paved road, a frictionless paved road, for developers to go through that level of testing, then you have essentially won them over, for lack of a better term, if you're able to make that part of their instrumentation and not just here's a document or a Confluence page you have to go read on secure development. Those instructions have to be codified into the development lifecycle. A lot of what we're seeing is: how do we get an engineering solution for an engineering problem?
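Here is one hedged sketch of the provenance idea: a deploy step that refuses an artifact whose digest does not match what the build step recorded. Real pipelines would use signed attestations (for example, in-toto or SLSA-style provenance); this only shows the shape of the check, and the attestation file format is invented.

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_build_artifact(artifact_path: str, attestation_path: str) -> bool:
    """Check the artifact we are about to deploy matches what the build recorded.

    The attestation here is a plain JSON record like
    {"artifact": "app.tar.gz", "sha256": "..."} -- an invented stand-in for a
    signed provenance document.
    """
    with open(attestation_path) as f:
        att = json.load(f)
    return sha256_of(artifact_path) == att["sha256"]

# Usage sketch: a deploy job would call this and stop on a mismatch.
# if not verify_build_artifact("app.tar.gz", "app.attestation.json"):
#     raise SystemExit("artifact does not match build provenance; refusing to deploy")
```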
Speaker 3:Sounds like you need to be a good project manager.
Speaker 2:To some degree. Yeah, you have to understand the end-to-end delivery lifecycle which comes with project management, but you also have to fortunately or unfortunately have some engineering knowledge as well. That says here's what good looks like. And if you don't have the engineering knowledge, you have developers or engineers that have that knowledge and you can collaborate with them and have them be part of the solution, as opposed to sort of throwing stone tablets from above.
Speaker 1:So I'm going to be coming in with the kind of the lowbrow questions here, since my background is music, music production and then audio production and stuff. So I get going to be coming in with the kind of the lowbrow questions here, since my background is music, music production and then audio production and stuff. So I get to play the role as kind of the average Joe in these types of conversations, because they get really complicated really quickly, right? So my question was are you training, basically training developers to start thinking more like cybersecurity professionals in a way, then? Or how does that show up when you're guiding them to start thinking about each one of these steps from a security standpoint? Is it based in coding or is it just kind of a multi-stage process throughout the whole development?
Speaker 2:So I think it's both sides. You have to educate both sides. You have to have developers think in more cybersecurity terms, and then you also have to have cybersecurity folks think in more engineering and developer terms. Maybe you don't have to choose, but if I had to choose which side to focus my efforts on, meaning do I get more developers to think like cybersecurity folks, or more cybersecurity folks to think like developers, I would do the latter. I'd have more cybersecurity folks think like developers and understand the engineering lifecycle, because I find that us cybersecurity folks are better equipped to have those engineering and developer conversations if we truly understand what their value streams look like. I don't know if a lot of us in the cybersecurity space are there yet, and especially on the compliance side, where we're trying to get evidence that the right things are being done throughout the development lifecycle, we're not asking the questions that will give us that level of assurance, in my experience.
Speaker 3:Yeah, Francis, I agree with what you're saying. We definitely want cybersecurity professionals that can think in more engineering speak, and I think that speaks to what you were just saying: it closes that gap, the rift, right? It brings everybody closer together. People can, I guess, for lack of a better term, speak out of both sides of their mouth, right? They understand what's happening on both sides, and that's kind of why I brought up the project manager portion. You're able to pull both sides together if you can walk that walk, especially in an audit. I do a lot of audits day to day for the organizations I'm working with now, and a lot of times we find people running away from us with their, you know, hair on fire, because they don't want to talk to the guy that's holding the audit in their hand.
Speaker 3:But if you come to them being able to speak, you know, the terms that they're used to, and in a polite manner, obviously, and speak to what they work with day to day, I find a lot better outcome.
Speaker 2:Yeah, and to your point around project management, I think the best project managers that I've seen know what good looks like. They don't necessarily know how to engineer or how to write code, but they know what good project execution should look like. And I would say the same thing for audits and for compliance and the security personas: they should know what good, well-governed engineering looks like without having to know exactly how to execute it. That training and that understanding is something that I'm asking a lot of the people that I work with, both in our firm and at the customers that we serve, to look into as part of their continued development.
Speaker 3:Yeah. I guess, you know, there's a lot going on. Obviously, every day in the news, every day on Bleeping Computer, there's something crazy happening. We talked about this in the pre-production meeting, but I also wanted to bring it up to get your thoughts on the recent issue with CrowdStrike: what they're doing, maybe what tactics or tools could they have used to make sure that situation didn't happen, and what could they do going forward to make sure it doesn't happen again?
Speaker 2:Yeah, so I'll avoid speaking about CrowdStrike in specifics. I know there's a lot of detail out there around what happened and what may have happened, but I would say that what happened to CrowdStrike is emblematic of what we're discussing: an over-reliance on tools and not the right engineering to apply those tools in our systems. The way we rely on software today, and not just CrowdStrike, if you look at SolarWinds from about three, four years ago, or the Kaseya issue from a few years back as well, is that when there is an issue within that supply chain, and in the CrowdStrike case it was obviously something at the kernel level, we don't know how the issue got there, because we don't know how these deployments happen within our environments, and we don't have a good recovery plan to get back to green. What I mean by that is we are wholly reliant on either the vendor or whoever installed whatever software we're using to get it up and running, and we don't have the in-house knowledge or resources to know exactly how to back out of that specific update or that specific install. That over-reliance is what's driving our resilience challenges when it comes to software supply chain risks, whether it's today with CrowdStrike or the next one. We need to invest in some level of know-how within the organization, so it's not just the vendors need to do right by us, but we have in-house resources that understand exactly how to recover from such an incident, because these incidents aren't going away.
Speaker 2:The speed of software delivery means that there's going to be another one just around the corner. And different from the CrowdStrike incident, look at Log4j three years ago as well: we knew that there was an issue, but we couldn't find out which of our components or which of our software relied on that specific package or that specific library. The asset management, the understanding of the environment, was something that was lacking for a lot of organizations, and that's why they struggled with the recovery. So, repeating myself a little bit, the true path forward is getting some level of analysis around where we have these things deployed, how to back out of those deployments, and how to quickly recover when we identify the next issue, because it's something that is going to happen again. I can bet money on that.
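The Log4j question, "which of our software relies on that library?", is only answerable with an inventory. Here is a minimal sketch of that idea: walk a source tree and flag dependency files that pin a package on an advisory list. The file names, line format, and advisory data are all simplified stand-ins, and a real check would compare version ranges from a vulnerability feed, not just names.

```python
import os
import re

# Packages with a known critical advisory (stubbed; real data comes from a feed).
ADVISORY_LIST = {"log4j-core"}

# Matches simplified lines like "log4j-core==2.14.1" or "log4j-core:2.14.1".
DEP_LINE = re.compile(r"^\s*([A-Za-z0-9_.-]+)[=:]{1,2}([0-9][0-9.]*)")

def scan_tree(root: str) -> list[tuple[str, str, str]]:
    """Walk a source tree and report (file, package, version) for advisory hits."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name not in ("requirements.txt", "deps.lock"):  # invented stand-ins
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    m = DEP_LINE.match(line)
                    if m and m.group(1) in ADVISORY_LIST:
                        hits.append((path, m.group(1), m.group(2)))
    return hits

if __name__ == "__main__":
    for path, pkg, version in scan_tree("."):
        print(f"{path}: ships {pkg} {version}")
```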
Speaker 3:Yeah, I agree with everything you said. I think a lot of us woke up, well, in the middle of the night, to a very real tabletop exercise that hopefully we don't have to do again anytime soon.
Speaker 2:Exactly, exactly. And, you know, tabletop exercises are something that I've done throughout my career, but we're always in a position where we're doing more of a breach-type tabletop versus a resilience tabletop. Obviously, we need to do both, and as we look at the software landscape and we see our dependency on software increase over time, we have to start doing these resiliency tabletops, because that's going to be as important as, you know, confidentiality issues or breach issues going forward.
Speaker 3:Yeah, so we want to play more offense; I think you're describing that we tend to play defense. We don't want to react all the time. We want to be thinking about these outcomes and, you know, get a solution in place for all these different situations that could come up.
Speaker 2:Yeah, especially for critical customer-facing assets that we're dependent on for the lifeblood of our business. We need to make sure those assets, especially if they're cloud assets, have some level of recovery or some level of resilience, because those tools or those platforms aren't going to be up 100% of the time. Make sure that call tree, those phone numbers, are updated.
Speaker 3:Yeah, absolutely.
Speaker 1:I want to throw it to Bill for a second and back up a little bit on our conversation about having security professionals thinking like developers. Bill, with your wealth of information and education, have you found yourself thinking like a developer at times when setting up architecture and mitigating risks, or analyzing organizations?
Speaker 5:Yeah, so I have, increasingly, especially as we pivot, particularly for smaller organizations, into the cloud, where everything is tool-driven. And, you know, as I work with developers who come to me saying, hey, give me some insight into what I need to do to secure my code, I find myself wishing I thought more like a developer.
Speaker 5:But that does lead to a question that I had for Francis, which is, I mean, in your answers you have suggested that developers have an over-reliance on tools, and it sort of reminds me of when engineers got really used to the GUI and kind of lost control of the command line, really digging down deep. It seems like a similar thing might be happening. As these smaller companies in particular move into the cloud, and they really rely on tools and these cloud-enabled levers that they can pull, are they really losing their grasp on these underlying development security principles? What challenges do you see present now in the public cloud that we may or may not be prepared for, and what can we do to get ahead of them?
Speaker 2:Yeah, I'll answer your last question first, just because it's top of mind. What we see, whether it's on the cloud infrastructure side or the development side, is that the reliance I talked about earlier is a result of this move towards microservices architecture: containerized applications, and using that way of segmenting applications, typically for resiliency. But because of all these microservices, we have various tools and services and identities talking to each other for performance.
Speaker 2:So when you look at your typical microservices architecture, the permissioning structure, or the identity structure, to make all these things sing can be very complex, and the way we get ourselves in trouble is: well, it works, so let's not look under the hood to see what's talking to what. What happens in that setup is you have either accidental or malicious capture of a service account or a developer's credentials, and once you do that, you sort of have the keys to the kingdom. Your ability to move laterally, to get sensitive information, or to cause resiliency events is fairly trivial once you get those permissions. So we're basically building bad permissions at scale without having the know-how, or applying the engineering effort, to really make sure that we're securing those identities at scale. That's where I see the biggest risk in this space: the tools and technologies that we've cobbled together into microservices architectures are so reliant on secrets and passwords and permissions to talk to each other, but we don't really have a way of managing that at scale. So that's the first piece.
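A hedged sketch of what hunting for "bad permissions at scale" might look like: dump service-account policies from wherever they live and flag the wildcard grants first, since those are what turn one phished credential into lateral movement. The policy shape here is invented for illustration, not any particular cloud provider's format.

```python
# Invented policy dump; real input would come from a cloud IAM API or asset DB.
accounts = [
    {"name": "svc-billing", "actions": ["db:read"], "resources": ["billing-db"]},
    {"name": "svc-ci", "actions": ["*"], "resources": ["*"]},
]

def overprivileged(accounts: list[dict]) -> list[str]:
    """Flag identities whose grants include wildcards on actions or resources."""
    flagged = []
    for acct in accounts:
        if "*" in acct["actions"] or "*" in acct["resources"]:
            flagged.append(acct["name"])
    return flagged

# The output is a review queue, not a verdict: some wildcard grants are
# deliberate, but each one should have a documented reason to exist.
print("Review these first:", overprivileged(accounts))
```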
Speaker 2:To answer your earlier question: developers want to do the right thing, in my experience. Yes, they may look for a fast track to production, but if you're able to instrument controls in their world, meaning that you give them a programmatic way of codifying your security rules rather than telling them to go read a document about the security policies, they're able to implement that within their pipelines.
Speaker 2:And I feel that in a lot of organizations you hear developers say, well, nobody told me how to do that. It's like, well, there is a policy that says you have to develop applications securely. Yeah, I know that, but it's not been translated into my world. That's where I think we have an opportunity, as governance professionals, to help them translate those policies into code, and there are various tools out there available to help you do that. And I'm seeing more standards that offer very bespoke secure software development frameworks in this space, things like the OWASP Software Assurance Maturity Model (OWASP SAMM) and the NIST SSDF, the Secure Software Development Framework. These now speak specifically to this problem and offer us governance professionals specific steps we can take to developers to help them secure their pipelines, and with these frameworks we're able to have a more coherent conversation around the gaps and the risks that exist in this space.
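To illustrate "translating policies into code," here is a sketch of a pipeline gate that turns a written policy into checks a build can actually run. The control names loosely echo NIST SSDF themes; the check functions and repo shape are placeholders, not a real framework mapping.

```python
# Sketch of policy-as-code: each written control becomes a runnable check.
# Control labels loosely echo NIST SSDF practices; mappings are illustrative.

def has_branch_protection(repo: dict) -> bool:
    return repo.get("branch_protection", False)

def has_required_reviews(repo: dict) -> bool:
    return repo.get("required_reviews", 0) >= 1

def secrets_scan_clean(repo: dict) -> bool:
    return repo.get("secrets_findings", 1) == 0

POLICY = {
    "protect code from tampering": has_branch_protection,
    "review human-readable code": has_required_reviews,
    "no committed secrets": secrets_scan_clean,
}

def evaluate(repo: dict) -> bool:
    """Run every codified control and report pass/fail per control."""
    ok = True
    for control, check in POLICY.items():
        passed = check(repo)
        print(f"{'PASS' if passed else 'FAIL'}: {control}")
        ok = ok and passed
    return ok

# In a real pipeline this would exit non-zero on failure instead of printing.
evaluate({"branch_protection": True, "required_reviews": 2, "secrets_findings": 0})
```

The design point is the same one Francis makes: the developer never reads the policy document; the pipeline enforces it and explains each failure in their own terms.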
Speaker 1:How do you identify what is a threat before releasing the software? How are you testing things out? Are you identifying these threats because you've seen other software have problems, like we've seen show up in the real world where things break? Which comes first, the chicken or the egg there?
Speaker 2:Yeah, I would say there are two approaches to that. There's a macro approach at the high level, where each use case or each new design element of your software, or new software, should go through some level of threat modeling to uncover the bad things that could happen. Honestly, a lot of organizations that I've worked with, or that I know, don't do this well at scale. They do it sometimes, but you have a lot of legacy applications running today that haven't had the opportunity to go through a true threat modeling exercise. At a more micro level, at a more grassroots level, there is the opportunity to do something called an attack surface review (sketched below). Where should we be concerned? What is being exposed publicly? What kind of identity provider are we using? Where do we have accounts whose credentials haven't been changed in the last few years? We can do an attack surface type mapping to get a more pragmatic view of the areas that we should double-click on.
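A grassroots attack surface review, reduced to a sketch: inventory what is publicly exposed and which credentials look stale. The data shapes are invented for illustration; real inputs would come from cloud APIs, DNS records, or an asset database.

```python
from datetime import date

# Invented inventory data standing in for real asset and identity exports.
exposed = [
    {"host": "api.example.com", "port": 443, "auth": "sso"},
    {"host": "admin.example.com", "port": 8080, "auth": "basic"},
]
accounts = [
    {"user": "deploy-bot", "last_rotated": date(2021, 3, 1)},
    {"user": "alice", "last_rotated": date(2024, 6, 1)},
]

def review(today: date = date.today()) -> None:
    """Print the items worth a double-click: weak exposure and stale credentials."""
    for e in exposed:
        if e["auth"] != "sso":
            print(f"double-click: {e['host']}:{e['port']} uses {e['auth']} auth")
    for a in accounts:
        if (today - a["last_rotated"]).days > 365:
            print(f"double-click: credential for {a['user']} not rotated in over a year")

review()
```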
Speaker 2:And a combination of those two things, the attack surface analysis and the threat modeling exercise, should give us some level of comfort that we're topping and tailing this risk identification piece, if that makes sense. And please feel free to pull me out of the weeds if I'm going too deep, but I want to make this as surface level as possible so people understand what we're trying to solve for here.

Speaker 1:Now I'm curious about that, because the software, especially, you know, with our operating systems and stuff, gets updated so frequently because of security threats that they're identifying. A lot of the time it seems to me that they're not really testing it and the users are the beta: they're kind of waiting to see what feedback or what kind of support tickets come in, and they're trying to roll things out as fast as they can.
Speaker 1:And we're kind of in a culture with our technology and with sales and marketing where, you know, the shiny new thing is what gets the most attention, and we're in an attention economy. So, you know, updates all the time on your phone, updates all the time. And I've said this before on the podcast: as an audio engineer, there are times where I just have to skip updates because they will break third-party plugins or things won't be talking to each other. It even kind of came up in our CrowdStrike conversation.
Speaker 1:You know, maybe don't push this update to everyone at the same time. Maybe choose a segment and see how that floats with a smaller group before rolling it out to everyone. So that's kind of where my inquisition was coming from there.
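The "don't push to everyone at once" idea is usually implemented as ringed or canary rollouts. Here is a minimal sketch of one way it could work: deterministically bucket endpoints into rings and only advance when the earlier ring has been marked healthy. The ring sizes and the health gate are made up for illustration.

```python
import hashlib

# 1% canary, then 10%, then everyone. Sizes are illustrative assumptions.
RINGS = [0.01, 0.10, 1.00]

def ring_for(device_id: str) -> int:
    """Hash the device ID into [0, 1) and map it to the first ring that contains it."""
    bucket = int(hashlib.sha256(device_id.encode()).hexdigest(), 16) % 10000 / 10000
    for i, cutoff in enumerate(RINGS):
        if bucket < cutoff:
            return i
    return len(RINGS) - 1

def should_update(device_id: str, rings_marked_healthy: int) -> bool:
    """Only devices whose ring has been opened by a passing health check update now."""
    return ring_for(device_id) < rings_marked_healthy

# Usage sketch: with only the canary ring open, roughly 1% of devices update.
print(should_update("device-42", rings_marked_healthy=1))
```

Hashing the ID (rather than picking devices at random each time) keeps ring membership stable across runs, so a device does not drift between cohorts mid-rollout.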
Speaker 2:Yeah, you're a hundred percent accurate, and what I feel, looking into my crystal ball, is that there is now an acknowledgement from legislators and governments that this is an issue where we have to push back on the software providers to take some responsibility.
Speaker 2:If you look at the new Secure by Design initiative from CISA, they're basically saying: release secure software. Don't ship insecure software with hardening guides for your consumers to apply; actually harden it upon release. They're pushing that responsibility back on the software producers to be accountable for the secure use of their software. We're also seeing that in the EU: we have DORA, the Digital Operational Resilience Act, and NIS2. These are new pieces of legislation coming into effect in the EU over the next six months that again put the emphasis on the software provider, or the digital product provider, to have some accountability for the security of their product and their software. And what I hope is that, like GDPR and PCI DSS, this creates a groundswell of acknowledgement that you can't just keep releasing insecure software. You have to take some level of accountability for the secure use of your software, and I hope that changes the current experience that you described.
Speaker 1:Bill and Nick, do you think that would help make your job easier, for lack of a better word, to have that onus on the software developers? What are your thoughts on that? Does that feel like it would shore up some of the risk that you're seeing pop up day to day?
Speaker 3:I think the question is twofold or the answer is twofold.
Speaker 3:You know, it's obviously going to take a lot more time and development and a lot more cost to complete that mission you're describing. But wearing a security hat, obviously we would want things to come out of the box ready, and I think we're seeing it across the board from everybody. For example, and this is where I was going to go with my question before, take AI, or take Apple. Their new iPhones came out last week and are releasing today, but the new software that furthers Apple Intelligence isn't coming out until next month. Maybe it's not directly the same thing, but they're shipping a product on the promise of software that's coming out later, and I think that's along the same lines of what you're talking about, Josh. They're giving you a product that's maybe not fully baked or fully secure, and they're leaving it up to security professionals like all of us sitting here to make sure that we're good stewards of data for whatever organization we're working for at the time.
Speaker 3:So that's kind of a broad answer, but drilling down: sure, yeah, in a perfect world everything would come out ready, but then I could also say I might not have a job.
Speaker 1:So we're kind of walking the line. I don't know, because it seems to me that you guys have so many things coming at you every day that if that were taken care of, you could focus more on, you know, training and cultural things, and shoring up risk in other areas that are often neglected, because you're always trying to put out these little fires. Bill, maybe you could speak to that. Do you have a take on that?
Speaker 5:You know, it reminds me of a story. I was talking with a head of development, it's been a while ago now, and he was very proud of saying that he doesn't put any firewalls in front of his products because it makes his developers more conscientious about security. Now, I would not advocate that approach, though there's sort of something to be said for the attitude. However, I think even if your developers are doing a great job, and they're doing secure by design and they're following OWASP and they're doing everything they can, you still have to have that defense in depth, right? Because there are just millions and millions of lines of code across all these interconnected products; you're never going to get it 100 percent right. There's always going to be a gap someplace. So I feel like security personnel will always have a role to play next to developers, providing that defense in depth, providing compensating controls, and that'll just be the way it is.
Speaker 2:And, if you don't mind, Joshua, just to add to the last two points that Bill and Nick made.
Speaker 2:So, yes, if we're able to operationalize a lot of the things we've talked about today as sort of proactive efforts to secure software, it allows us to get better at the reactive incident management and breach management, to Bill's point, because I don't know anyone that is really great at doing that.
Speaker 2:So if we can put out these fires that we have to worry about as part of the development process, we can spend that time building out better incident and recovery processes, because we need them. And then, going back to a point that Nick made, and this is just a slight pushback: I don't believe that building more secure, resilient software is a significant increase in cost or time, because we can engineer those practices to be part of the development lifecycle, not separate line items or separate efforts. And I want organizations to embrace that message, because there is that perception: oh, we have to spend more money on security controls or governance controls. I don't believe it's significant enough to not do it, or to kick the can down the road. There are ways to make it simple, ways to make it part of the development lifecycle, so it's not something that slows the development lifecycle down.
Speaker 3:Yeah, just that quickly you've changed my mind; I would agree. Going back to the project management discussion I was having before, having those plans and processes in place is probably going to get you a much better, more secure package right out of the gate, and, like you said, probably not add to the time or cost if you budget appropriately. So, yeah, great point.
Speaker 1:We talked a little bit about tools today. I'm curious, Francis: with the rise of AI-assisted development and generative AI, have you seen any of that come into the space to help shore up the security of coding? It seems to me that AI could easily go through those millions of lines of code that Bill just mentioned and maybe point out some weaknesses, but I'm curious to hear what tools are coming online for that.
Speaker 2:I'll answer the question, but I want to make a distinction first: the distinction between secure coding and secure software development. There are great tools today, from an AI standpoint, that are able to give you initial or immediate feedback on the insecurity of the code you've written. But what we're seeing, as I mentioned earlier, is that code generation, or coding, is becoming a smaller part of the entire software engineering effort. When we talk about, again, the microservices architecture, the development, your build pipelines, your CI/CD testing, all those parts of your software factory need some level of security attention, and I've not yet seen that level of AI improvement in the rest of the software factory. So we've shifted left in terms of securing the code, getting more secure code or initial feedback, but we also need to apply that same level of engineering and automation for better security throughout the rest of the lifecycle.
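For a sense of what "immediate feedback" on code can look like without AI at all, here is a tiny pre-commit-style sketch that flags obviously dangerous Python patterns at commit time. A real setup would use a proper SAST tool; the patterns and messages here are illustrative, and the point is feedback in the developer's workflow rather than a spreadsheet weeks later.

```python
import re
import sys

# Illustrative risky patterns; a real scanner covers far more, with less noise.
RISKY = [
    (re.compile(r"\beval\("), "eval() on untrusted input is code execution"),
    (re.compile(r"subprocess\..*shell=True"), "shell=True invites command injection"),
    (re.compile(r"(password|api_key)\s*=\s*['\"]"), "possible hardcoded secret"),
]

def check_file(path: str) -> int:
    """Print a finding per matching line and return the number of problems."""
    problems = 0
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern, why in RISKY:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {why}")
                    problems += 1
    return problems

if __name__ == "__main__":
    # Usage: python check.py file1.py file2.py  (wired in as a pre-commit hook)
    sys.exit(1 if sum(check_file(p) for p in sys.argv[1:]) else 0)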
Speaker 3:I think the portion of AI that I was going to bring up before, and maybe it's a non-issue, but I was curious about: you and Bill were talking about engineers relying on tools. Do you think this is the next tool that developers are going to rely on, assuming that AI will cover them?
Speaker 2:Yeah, it's a great point, because what you're essentially doing by adopting things like copilots and all these code-generation tools is potentially creating a bottleneck for the rest of your lifecycle, because that code still has to go through security testing and functionality testing and all the different gates before release. So if you have all these junior developers, and even senior developers, now pushing so much code through the rest of the software factory, you're actually unintentionally slowing down the process using something that you thought would help you with velocity and productivity. And then, going back to your earlier point as well, we're now back to an over-reliance on a tool, something doing part of our role without a true understanding of what it's spitting out. So if we just take whatever a copilot spits out for code and push it through without understanding exactly what it's doing, then we're back to where we started, which is not really knowing the context and the business logic of our applications, which can make incident response and recovery more painful than it is today.
Speaker 3:You went exactly where I was going to go. Then we're back to where we don't know what's going on within the application, and, like you said, at that point we're opening ourselves up to more risk we're not quite sure about, so we're back to playing defense versus offense.
Speaker 5:Yeah, you know, and I've read, and I hope I don't get the name of the college wrong, if I do, I apologize, but it was Stanford, or maybe it was Cambridge, one of the notable schools released a paper several months ago that concluded that code co-developed with AI was less secure than code developed by people, and their conclusion was that AI just doesn't yet understand the nuances of application and development security. So I guess, first of all, would you agree with that conclusion? And do you see, or at least hope for, changes in AI engines that will help, or at least assist, developers with making more secure code?
Speaker 2:Yeah, I haven't seen that paper, but I understand where it's coming from. What I would say is, if that was true three months ago, it's probably less true today, because of the way AI is improving over a very short period of time. I'm repeating myself a little bit here.
Speaker 2:But I still think that secure coding, or developing secure code, is the smaller part of the equation that we're trying to solve here. Even if AI starts to develop more secure code, there's the permission structure that we talked about earlier, the identities: the fact that no matter how secure your code is, if I can phish a developer and get their credentials, then the robustness of the code doesn't prevent that level of attack. So I still think that AI is helping from a productivity standpoint, and the security of the code that AI is producing is improving over time. But let's not focus too much on that specific part of the development lifecycle. Let's look at it holistically and look at all the other threat vectors that could impact the security of our software, including the security of the code but, more importantly, the rest of the pipeline.
Speaker 1:Yeah. So, francis, can you tell us a little bit about your new business venture and what you've started here and what you're working on now?
Speaker 2:Yeah. So, as I mentioned, I've been in this space for a while now, and what we've seen over the last, again, five years is that companies need better education, both on the security side and obviously on the developer side, to build more secure, reliable software. We see that there is a gap in that level of education in terms of people that can speak to both sides, and we have a team of people that can have those conversations and know the right tools and levers to pull to get better, more secure, more reliable software. We're also seeing an increase in requirements like the CISA attestations, people that have to comply with the new SBOM attestations. How do we make that more efficient? How do we make it less costly? How do we make it frictionless, so it's not an additional burden on your development team? So we're having those conversations with folks and helping them build better, more secure software.
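To ground the SBOM point, here is a sketch of the question an SBOM lets you answer quickly, the one that was so hard during Log4j: do we ship this package anywhere? It reads a CycloneDX-style JSON document; the advisory lookup is a stub for illustration.

```python
import json

# Packages with a known issue (stubbed; real data would come from an advisory feed,
# and a real check would also match affected version ranges).
ADVISORIES = {"log4j-core"}

def affected_components(sbom_path: str) -> list[str]:
    """Return name@version for every SBOM component on the advisory list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    # CycloneDX JSON lists dependencies under a top-level "components" array.
    for comp in sbom.get("components", []):
        if comp.get("name") in ADVISORIES:
            hits.append(f"{comp['name']}@{comp.get('version', '?')}")
    return hits

# Usage sketch: run against every product's SBOM when a new advisory lands.
# print(affected_components("product-sbom.cdx.json"))
```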
Speaker 1:And what's the name of your company?
Speaker 2:DevSecFlow. We believe in that whole DevSecOps movement, but the part that is missing is sort of the workflow aspect: getting a true workflow from development to operations, which includes security. How do we make that a seamless process?
Speaker 1:Excellent. Well, thanks so much for joining us today. You've been listening to Francis Ofungwu from DevSecFlow. We've also been joined by Nick Mellom and Bill Harris from IT Audit Labs. My name is Joshua Schmidt, your co-host and producer. You've been listening to The Audit, and thanks so much for joining us today, Francis. Thanks for your time; it's been a really interesting conversation. I hope we'll stay in touch, maybe on LinkedIn, and we'd like to track what you're up to over the course of the next couple of years here. So don't be a stranger, and thanks again for joining us.

Speaker 2:Thank you for having me. Appreciate it.
Speaker 4:You have been listening to The Audit presented by IT Audit Labs. We are experts at assessing risk and compliance, while providing administrative and technical controls for your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and by subscribing to this podcast on Apple, Spotify, or wherever you source your security content.