The concept of the cybersecurity manager is evolving, as the role shifts from the traditional “gatekeeper” to a more universal, company-wide security facilitator. Zane Lackey, our guest this month, is one of the most important white hat hackers in the world, and author of books such as Mobile Application Security and Hacking Exposed: Web 2.0. Currently, Lackey is the co-founder and CSO of Signal Sciences, a web application protection platform, and is also a member of the Advisory Board of the Internet Bug Bounty Program and the Open Technology Fund.
Even as new infrastructures, services, and applications are created, simple things such as security failures at the endpoint or the lack of two-factor authentication continue to be the cause of the global attacks making headlines.
We began the interview by recalling Zane’s days as a white hat hacker.
Panda Security: What techniques do you use to detect a vulnerability and expose a threat to avoid an attack?
Going back to my pentesting days, which was quite a while ago at this point, the most common things I would look for were the assumptions made in the design of the system. Then I would look for ways those assumptions might be violated. On the defensive side, I took that mindset and applied it to thinking about how to empower development teams and DevOps teams. That was one of the biggest lessons learned for me in going from a white hat, security consulting, pentesting kind of role over to becoming a CISO and building a security organization: really focusing on how to give the engineering team as much visibility into what’s going on in production as possible.
PS: How do programs like Internet Bug Bounty help to resolve vulnerabilities that have been discovered? After a flaw is discovered, how do you act?
I know there have been some changes in the Bug Bounty program recently, so I don’t want to say anything that would be incorrect there, but I think that from having run multiple Bug Bounties in the past, the important thing is trying to establish good communication with the researchers that come in. Because a lot of times, you’ll get a report that is partial or doesn’t contain all the info that is needed to reproduce the issue. So being able to say, “Hey, these are the five bits of information that we need so we can take this to the relevant service team or application team”, can help communication on both sides. And at the same time, trying to communicate back to the researchers so it’s not just a black box for them. Trying to be as transparent as possible on both sides — that’s what really leads to a good Bug Bounty experience, both for the researchers and for the organizations that actually work with them.
I think anyone who’s run a Bug Bounty program gets used to seeing all kinds of things. You see everything from systems that you didn’t know about, to pretty much every type of vulnerability, even ones that you don’t think you have. So I really strongly believe in the value of these programs, and I think they complement pentesting very well. Combining the two can really help most security programs out there. The reason I like Bug Bounty programs so much in combination with pentests is that they allow you to focus your pentests on very specific areas rather than trying to have them test everything when there isn’t time for that. So you can use your bug bounties to try and get very wide coverage, and you can use your pentests to try and get very focused and specific coverage.
PS: The NHS has recently hired white hat hackers to identify cyberthreats. Do you believe ethical hackers are indispensable in today’s organizations to avoid breaches and strengthen defense?
For every organization, you need to be thinking about how people actually attack your systems. So white hat hackers, and pentesting, and bug bounties, those are all a piece of it. They’re not the full story, but they’re a piece of it. You don’t want to be doing security just for compliance, or just trying to check the box of different defenses to put in place. I challenge folks to have the number one thing that they’re thinking about as they’re trying to build a security program be: how would an attacker actually attack my organization? And really use that to drive the defensive programs that you put in place. And that’s where red teaming, white hat hackers, bug bounties, and all these ways to test your system can be a very powerful feedback loop. Because they can show, when your systems are being attacked, “this is where they went.” And that can focus your defenses.
So I really strongly believe in balancing offense and defense and using one to guide the other, and not just trying to do one in isolation.
PS: How can you implement DevOps to make companies safer?
I truly believe that embracing DevOps and embracing Cloud can make you safer. The reason for that is, in any development methodology, you’re still going to have vulnerabilities. So as soon as you recognize that fact, the logical conclusion is that the development methodology that allows you to react the fastest is the one that can make you safest. In the old model of waterfall and changing applications very slowly, the problem was that there was no way to react quickly. So this is why DevOps, Cloud, and the shift to agility can actually make us safer.
PS: What can we learn from massive data breaches like Equifax, which happened via a web application vulnerability?
I’d say there are two things to learn from the breaches that we see every day. One is that, 99% of the time, they are the completely common, off-the-shelf things: it’s things that weren’t patched, it’s a weak password, it’s malware on an endpoint, etc. So going back to a previous comment, I would encourage all organizations to not think about the “insane, state-sponsored zero-day that’s crazy complex”, but rather to focus on the basics: how do you get coverage over malware on your endpoints? How do you get two-factor authentication on all your accounts? And how do you get coverage over the web application layer?
Because I think the other lesson that we’re all just starting to see in terms of the breaches, but which we’ve been seeing in the trenches for the last few years, is that historically the security risk was at the infrastructure layer and the network layer, so we always thought firewalls and IDSs and things like that could mitigate it. But over the last several years the risk has all moved up to the application layer and out to the endpoint. So learning where your risk actually sits is the number one lesson we should be learning as an industry right now, across the breaches that we’ve been seeing.
PS: Do you think companies will be ready for the GDPR? What will they need to do to be compliant and protect their data?
With any new compliance regime, there’s a lot of concern with it up front because no one is exactly certain what it looks like yet. So I think it will be a little fuzzy at first; then you’ll see products and services emerge to help with it, and you’ll see a much clearer picture of what the auditors are actually looking for and what steps really need to be taken as part of that.
Security and compliance are two separate things that sometimes overlap in small pieces. So to defend your data, and not just be compliant with something, you have to ask: how do I defend my endpoints? How do I defend my web applications, my APIs, and other things at the application layer? Because those two buckets are where so much of my risk is. So you should focus on getting visibility into those: getting effective controls in place around malware on the endpoints, two-factor authentication for as many services as you can put it on, and then getting coverage, visibility, and protection for your application layer.
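To make one of those basics concrete, here is a minimal sketch of server-side verification of a TOTP second factor at login. The pyotp library, the account name, and the enrollment flow are illustrative assumptions for the sketch, not details from the interview.

```python
# Minimal sketch of TOTP-based two-factor verification.
# pyotp is an illustrative library choice; names/values are placeholders.
import pyotp

# Generated once per user at enrollment and stored alongside the account.
secret = pyotp.random_base32()

# Shown to the user (typically as a QR code) so an authenticator app can enroll.
provisioning_uri = pyotp.TOTP(secret).provisioning_uri(
    name="alice@example.com", issuer_name="ExampleCorp"
)

def verify_second_factor(user_secret: str, submitted_code: str) -> bool:
    """Accept the login only if the submitted code matches the current
    TOTP window (allowing one step of clock drift)."""
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)

# Example: called after the password check succeeds.
print(verify_second_factor(secret, pyotp.TOTP(secret).now()))  # True
```

The point of the sketch is simply that the control Lackey keeps returning to is cheap to add relative to the breaches it prevents.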
PS: In terms of application security, do you prefer security by programming from within, or do you prefer protecting it from the outside?
The answer is both. To defend applications effectively, you think about how to eliminate as many bugs as possible during the development cycle, but at the same time you recognize that there will always be vulnerabilities. So you couple that with getting visibility and defense into the code that’s actually in production, rather than just scanning for bugs once it ships and then ignoring it once it’s live on the Internet. I think that’s been a major failing of the SDLC for the past 10-plus years.
The biggest piece of commonality I see amongst organizations that are doing this well is that they try to eliminate bugs before production, but they recognize that there will always be vulnerabilities, so they are investing very heavily in getting visibility into how those services are being attacked in production and bringing that visibility directly to the development teams and the DevOps teams themselves, so that they can self-serve with that information and not have to rely on the security teams to defend the services that they’re building.
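To ground that idea of production visibility, here is a minimal, hypothetical sketch (it is not Signal Sciences’ product, nor code from the interview) of a web service that flags attack-like requests and emits structured logs that development and DevOps teams can consume directly. The Flask app, the signature list, and the log format are all illustrative assumptions.

```python
# Illustrative sketch: surface attack-like traffic to the teams that own the service.
import logging
import re

from flask import Flask, request

app = Flask(__name__)
security_log = logging.getLogger("appsec.visibility")
logging.basicConfig(level=logging.INFO)

# Deliberately simple signatures for the sketch; real coverage would be far broader.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bunion\b.+\bselect\b", re.IGNORECASE),  # SQL injection probe
    re.compile(r"<script\b", re.IGNORECASE),              # reflected XSS probe
    re.compile(r"\.\./"),                                 # path traversal probe
]

@app.before_request
def flag_suspicious_requests():
    # Inspect the raw query string; a fuller version would also cover bodies and headers.
    payload = request.query_string.decode(errors="ignore")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(payload):
            # Structured event the development and DevOps teams can self-serve from,
            # instead of waiting on the security team.
            security_log.info(
                "suspicious_request path=%s pattern=%s source_ip=%s",
                request.path, pattern.pattern, request.remote_addr,
            )
            break  # Log once per request; surface visibility, do not block.

@app.route("/")
def index():
    return "ok"
```

The design choice mirrors the interview’s theme: the security-relevant signal goes into the same logging pipeline the engineering teams already watch, rather than into a tool only the security team can see.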