Software developers have become the masters of the digital universe.
Companies in the throes of digital transformation are in hot pursuit of agile software and this has elevated developers to the top of the food chain in computing.
There is an argument to be made that agility-minded developers, in fact, are in a terrific position to champion the rearchitecting of enterprise security that’s sure to play out over the next few years — much more so than methodical, status-quo-minded security engineers.
With Black Hat USA 2021 reconvening in Las Vegas this week, I had a deep discussion about this with Himanshu Dwivedi, founder and chief executive officer, and Doug Dooley, chief operating officer, of Data Theorem, a Palo Alto, CA-based supplier of a SaaS security platform to help companies secure their APIs and modern applications.
LW: Bad actors today are seeking out APIs that they can manipulate, and then they follow the data flow to a weakly protected asset. Can you frame how we got here?
Dwivedi: So 20 years ago, as a hacker, I’d go see where a company registered its IP. I’d do an ARIN Whois look-up. I’d profile their network and build an attack tree. Fast forward 20 years and everything is in the cloud. Everything is in Amazon Web Services, Google Cloud Platform or Microsoft Azure and I can’t tell where anything is hosted based solely on IP registration.
So as a hacker today, I’m no longer looking for a cross-site scripting issue of some website since I can only attack one person at a time with that. I’m looking at the client, which could be an IoT device, or a mobile app or a single page web app (SPA) or it could be an API.
A full stack attack starts by looking at a client and testing the client to learn what it can tell me about the backend servers. And the server is not a traditional server; it’s often an API running on Lambda or some other cloud service. So now I have this IoT hardware that’s talking to a server over an API running on Lambda – boom I’ve got my full stack attack surface: hardware, software, API and cloud all within a single attack.
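The client-probing step Dwivedi describes can be sketched in a few lines. This is a hypothetical illustration, not a tool either company ships: an attacker scans the raw bytes of a client binary (an IoT firmware image or mobile app package) for embedded backend URLs, which reveal the API layer to attack next. The binary contents and domain below are invented.

```python
import re

# URL-like strings commonly left embedded in client binaries
API_URL_RE = re.compile(rb'https?://[A-Za-z0-9.\-]+(?:/[A-Za-z0-9._\-/]*)?')

def extract_endpoints(blob: bytes) -> set[str]:
    """Return distinct URL-like strings found in raw client bytes."""
    return {m.group(0).decode("ascii", "replace") for m in API_URL_RE.finditer(blob)}

# A fake client binary with one embedded API base URL (illustrative only):
fake_binary = b"\x00\x01config=https://api.example-iot.com/v2/devices\x00telemetry"
print(extract_endpoints(fake_binary))  # {'https://api.example-iot.com/v2/devices'}
```

Each recovered endpoint maps directly onto the full stack attack surface he describes: the hardware that shipped the string, the software that calls it, the API it names, and the cloud service behind it.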
LW: Can you give us more color on how APIs factor in?
Dooley: This whole idea of full stack hacking came from the idea of a full-stack engineer, which came about because of the advent of DevOps and the agile software development process. You could no longer just write code and then have other people run it for you; and you couldn’t have the people who ran code not know how to update the code.
So those teams got smashed together in this cultural phenomenon of DevOps, and these full-stack engineers started popping up. So the developers now began architecting the whole system, the full stack. And as a result, the attackers realized they could pull off a full stack attack by going in through the application’s network layer, represented by APIs. And then they could back their way into the microservices that are ephemerally spinning up and spinning down in the cloud. So this is where we’re seeing the most innovative attacks that are creating the biggest headlines.
LW: How does the endless spinning up of new APIs and API updates contribute?
Dooley: What happens is, as developers constantly push innovation into these apps . . . that newest piece of software that’s just been added to the app is unlikely to have gone through hard-core security vetting and reliability testing. So if you’re the attacker, you’re going to go after this weakest link. Security has always been a weakest-link-driven system.
LW: What does so-called “attack surface management” entail?
Dwivedi: It comes down to very basic blocking and tackling. But it turns out the basics are quite hard when you have a moving target. Twenty years ago, you knew you had 1,500 servers because you paid the bill for them. Today, at 3 o’clock in the afternoon (peak) you might have 1,700 servers, and at 3 o’clock in the morning (off peak) you might have just three.
Things are ephemeral – they’re there and then they’re gone. It’s hard to know what your inventory (APIs, compute, storage, availability zones, etc.) is at any given time. So blocking and tackling is figuring that out, and then once you know what your inventory is you need to auto update, auto test and auto secure.
That’s what attack surface management really is; it’s getting your inventory management together and then being able to understand, automatically, whether it’s vulnerable to basic attacks. A human analyst can’t be involved because there aren’t enough humans to track all of the assets that you may, or may not, know about.
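The inventory side of what Dwivedi describes reduces to diffing point-in-time snapshots of your assets so anything new gets queued for automated testing. A minimal sketch, with invented asset names (the 3 p.m./3 a.m. figures echo his example, not real data):

```python
# Diff two point-in-time inventory snapshots to flag assets that appeared
# (candidates for auto-test / auto-secure) or vanished (spun down) between scans.

def diff_inventory(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Return assets new since the last scan and assets that disappeared."""
    return {
        "new": current - previous,
        "removed": previous - current,
    }

snapshot_3am = {"api-orders", "api-billing"}                                # off peak
snapshot_3pm = {"api-orders", "api-billing", "lambda-resize", "s3-exports"} # peak

delta = diff_inventory(snapshot_3am, snapshot_3pm)
print(delta["new"])  # assets needing automated vetting before attackers find them
```

In practice the snapshots would come from the cloud provider’s own inventory APIs on a tight schedule; the point, as Dwivedi says, is that no human analyst sits in this loop.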
LW: Going forward, things are only going to get more complex.
Dooley: Yes, and it is extremely difficult to take these static constructs, like firewalls or like operating system agents, which worked well in static on-premises datacenters, and use them today. These have been two tried and true security tools and techniques that we’ve been using for about 20 years because the data center was typically pretty static.
But your inventory is never going to be static in the public cloud. It’s a moving target; it’s hyper dynamic. And if your inventory is dynamic, at any given moment, your attack surface can be different. The industry needs tools and techniques that work with that kind of architecture and in that kind of model.
The good news is there are a lot of innovative companies and teams who are now evolving these security techniques, and these runtime protection innovations, to deal with this hyper dynamic environment that’s being driven by cloud native apps.
LW: Where are we now, and where are things headed in the next few years?
Dwivedi: Active protection is something we’re almost ready for. I think developers are ready for it, but security teams are probably a couple of years away. Developers are starting to say, ‘Hey listen, we can’t have our APIs leaking data, we need to secure them automatically.’ Whereas there’s a little bit more analysis paralysis in our security community, of which I’m a part.
The developers want to automate; they’re saying, ‘If we can auto fix, let’s do it and we’ll correct for any side effects as we go on.’ Developers can see that it is automation that’s going to get them to the promised land. Security wants to look at the logs, and then evaluate the logs, and then have a meeting, and then decide by committee what to fix or not fix. Security is less prepared because we tend to be more analytical and less automated.
So that’s why I think it is the developers who are ready for auto protection. But our security community will probably still be running in more of a passive mode for the next couple of years — until they see that a breach could’ve been stopped if active protection had been in place.
LW: So what are the main components of active protection?
Dooley: There are certain pillars of security that we always think about: authentication, authorization, encryption, and auditing. And so, the reason why these four pillars always come up as foundational security practices is because they can be applied everywhere.
It is vital to know who is getting access to what and when; and to have a tracking mechanism that applies privileged access policy to assets, applications, and data; and to make sure there is a record of each connection that can’t be tampered with.
Everyone in information security should know that if you can’t maintain these very foundational elements of security then you really don’t have a security program. But if you can always enforce these four pillars, no matter how dynamic the nature of your APIs and cloud services are, and no matter how dynamic your client layer is, then you’re doing extremely well. Active protection means you’re able to get these four pillars of security actively in place in a very dynamic environment.
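Dooley’s four pillars can be made concrete with a toy request gate. This is a hedged sketch, not any vendor’s implementation: the tokens, roles, and policy table are invented, encryption is reduced to a transport flag, and auditing is a simple hash chain so any tampering with earlier entries breaks every later hash.

```python
import hashlib

# Illustrative data only -- real systems would use an identity provider,
# a policy engine, TLS termination, and an append-only audit store.
USERS = {"tok-abc": ("alice", "admin"), "tok-xyz": ("bob", "viewer")}
POLICY = {"admin": {"read", "write"}, "viewer": {"read"}}
AUDIT_CHAIN = ["genesis"]  # each entry hashes the previous one (tamper-evident)

def handle(token: str, action: str, over_tls: bool) -> bool:
    if not over_tls:                       # pillar 3: encryption in transit
        return False
    user = USERS.get(token)                # pillar 1: authentication
    if user is None:
        return False
    name, role = user
    allowed = action in POLICY[role]       # pillar 2: authorization
    entry = f"{AUDIT_CHAIN[-1]}|{name}|{action}|{allowed}"
    AUDIT_CHAIN.append(hashlib.sha256(entry.encode()).hexdigest())  # pillar 4: auditing
    return allowed

print(handle("tok-xyz", "write", over_tls=True))  # False: a viewer may not write
```

The "active" part is that these checks run inline on every call, no matter how ephemeral the API behind them is, rather than being reconstructed later from logs.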
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(LW provides consulting services to the vendors we cover.)