
Organizations are unearthing the potential of digital transformation, but security often remains a gatekeeper to this path of promised potential, largely due to its own delusions about what modern infrastructure means. As Herman Melville wrote in Moby Dick, “Ignorance is the parent of fear” – and security is too frequently hindered by its fear of the new and the agile precisely because of its ignorance about blossoming technologies.

In this blog series, drawn from my QCon talk last year, I will explore the history of infosec’s errant gatekeeping in the face of new technologies, and how we can encourage security to embrace new technologies to enable the business, rather than get in its own way. You can read part one here.

Now that we have explored infosec’s history of cloud compunction, we can turn to the next looming beast security teams must face: microservices.

This darkling terror security harbors in its heart is that microservices create a titanic, labyrinthine attack surface. It is as if they believe that each microservice adds the same attack surface as a traditional monolithic application – and thus that, with thousands of microservices, the attack surface of the monolith days is multiplied by a thousand as well. Through this lens, it is understandable why microservices would be absolutely terrifying – but this mental model is, of course, wildly mistaken.

In this infosec Copernican Revolution, it is exceedingly difficult for security to let go of the perimeter model. Although proven false countless times, the pervading belief was – and still often is – that if the perimeter is secure, then the enterprise will be safe. History shows this to be illusory: lateral movement was so pernicious precisely because once attackers bypassed perimeter defenses, the only defense they encountered was #yolosec, giving them free rein over internal networks.

While security is lamenting the dissolution of the perimeter and the daunting monster that is microservices, they completely miss that microservices force the very outcome security purportedly dreamed of for so long – that security be baked in rather than bolted on. Because microservices are typically considered publicly-facing by default, no one can rest on the assumption that perimeter defenses will save them – thus turning native security controls into the necessary default rather than a nice-to-have.1

Let us now turn to two essential components of microservices environments to explore the security delusions about each individually: APIs and containers.

APIs: Infosec’s Anathema

In a November 2018 survey by Ping Identity on API security concerns2, 51% of respondents noted that they are not certain their security team knows about all the APIs in their enterprise’s network. Certainly, developers are now opening many API endpoints – but that does not differ from the prior mode of developers opening particular ports on an internal network. 30% of respondents said they do not know whether their organization has experienced a security incident involving their APIs – and I suspect that same 30% would not know whether they had been compromised at all, APIs or not.

CISOs are particularly fraught over the idea of public APIs – that they add attack surface, that they sit so close to the grasp of attackers, that it is impossible for security to have control over all of them. As one security professional publicly opined, “Formerly, local networks had only a few connections to the outside world, and securing those endpoints was sufficient.”3 That, in fact, was never truly sufficient. This strategy resulted in local networks that were astonishingly brittle, because of the assumption that network security controls would prevent anyone from gaining access.

Infosec practitioners will cite related fears that APIs can provide a “roadmap” to the underlying functionality of the application, and that this roadmap can aid attackers. These fears are, quite frankly, ridiculous. Any legitimate security expert will caution that “security through obscurity” is a terrible approach. Hiding functionality does not make your app inherently secure or insecure. However, if infosec teams are concerned about this, it is a near-certainty that the app is not designed to be resilient – which is a failure of the infosec program.

As I advocated in my previous research on resilience in infosec, the only way to ensure resilient systems from a security perspective is to assume that your added security controls will fail. Specifically, I recommended treating any internal or private resources as public – because otherwise you will bask in a false sense of security when your controls are inevitably bypassed. It is eye-opening how few enterprise security teams traditionally treat their internal or private resources this way, as if there were not extensive documentation of attackers bypassing network security tools.

Further, what security practitioners often do not realize is that standard OWASP-based attack tools (such as Burp or Nessus) do not work nearly as well on API endpoints: there are no links to follow, no attack surface to map, unknown responses, and potentially no stack traces. What is more, for RESTful JSON APIs, whole classes of vulnerabilities – cross-site scripting (XSS), session management flaws, compromised cookies, token exposure – are removed through the use of digest authentication and JSON Web Tokens (JWTs). If anything, API-centric apps abate application security (appsec) concerns rather than aggravate them.
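To make that concrete, here is a minimal sketch of cookie-free API authentication using the PyJWT library. The secret handling, claim names, and five-minute lifetime are illustrative choices, not a prescription:

```python
# A minimal sketch of cookie-free API authentication with JSON Web Tokens,
# using the PyJWT library (pip install PyJWT). Secret handling, claim
# names, and the five-minute lifetime are illustrative, not prescriptive.
import datetime

import jwt

SECRET = "load-me-from-a-secrets-manager"  # hypothetical; never hard-code


def issue_token(service_name: str) -> str:
    """Issue a short-lived bearer token for a calling service."""
    claims = {
        "sub": service_name,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=5),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")


def verify_token(token: str) -> dict:
    """Return the claims; raises jwt.InvalidTokenError if expired or tampered."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])


if __name__ == "__main__":
    token = issue_token("billing-service")
    print(verify_token(token))  # {'sub': 'billing-service', 'exp': ...}
```

There is no session state to hijack and no cookie to steal – the token either verifies or it does not.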

One of the performance benefits of a microservices approach is born of standardization – and standardization also begets security benefits. However, standardization is not a common, nor commonly understood, topic among enterprise infosec professionals. They still live in the tailored, monolithic universe, not grasping that there can be a singular, well-developed API deployment that is then replicated – reducing their work to rigorously testing that single deployment until they are comfortable with its security posture (a sketch of which follows below). Standardization is a prevalent factor in the world of containers, as well – and is one no less fraught with security concerns.
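Before turning to containers, here is a rough sketch of what “rigorously testing the single deployment” might look like – a couple of pytest-style checks run against a hypothetical reference deployment, using the requests library:

```python
# A rough sketch of "test the standard once": pytest-style checks run
# against a single reference deployment of the standardized API, using the
# requests library. The base URL and endpoints are hypothetical.
import requests

BASE_URL = "https://api-reference.internal.example.com"  # hypothetical


def test_unauthenticated_requests_are_rejected():
    # Every endpoint built from the standard must refuse anonymous calls.
    resp = requests.get(f"{BASE_URL}/v1/accounts", timeout=5)
    assert resp.status_code == 401


def test_errors_do_not_leak_stack_traces():
    # Unknown routes should fail cleanly, with no debug output in the body.
    resp = requests.get(f"{BASE_URL}/v1/does-not-exist", timeout=5)
    assert resp.status_code == 404
    assert "Traceback" not in resp.text
```

Once the reference deployment passes, every replica inherits the same posture for free.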

The Curse of Containers

This new world of public-facing API connections is not the only aspect of modern technology met with condemnation and trepidation by enterprise information security – containers themselves are seen as quite the grave bouquet of threats.

Not every infosec professional realizes that containers are not, in fact, featherweight virtual machines (VMs). Frequently asked questions, as noted by Mike Coleman, include “How do I implement patch management for containers running in production?” and “How do I back up a container running in production?” – questions that evince a lack of understanding of the nature of containers. They do not know that persistent data lives in a separate volume, which is what actually gets backed up, and that you patch the container image rather than the actively running container.
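As a minimal sketch of that workflow, here is what “patch the image, not the container” might look like with the Docker SDK for Python (docker-py); the registry, tag, container name, and volume name are all hypothetical:

```python
# A minimal sketch of "patch the image, not the container" with the Docker
# SDK for Python (pip install docker). The registry, tag, container name,
# and volume name are all hypothetical.
import docker

client = docker.from_env()

# 1. Pull the patched image, built by CI from an updated base image.
client.images.pull("registry.example.com/web", tag="1.4.2")

# 2. Stop and remove the container still running the vulnerable image.
old = client.containers.get("web-frontend")
old.stop()
old.remove()

# 3. Start a replacement from the patched image. Persistent state lives in
#    a named volume, which is also what actually gets backed up.
client.containers.run(
    "registry.example.com/web:1.4.2",
    name="web-frontend",
    detach=True,
    volumes={"web-data": {"bind": "/var/lib/app", "mode": "rw"}},
)
```

Nothing in the running container is ever “patched” in place; the old container is simply replaced.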

A recent survey by Tripwire4 incidentally exposes this confusion among information security professionals. 94% of respondents have concerns regarding container security – and this “lack of faith” has led 42% to delay or limit container adoption within their organization. The leading reason (54%) respondents gave for their concerns is inadequate container security knowledge among teams – and we should be grateful they at least acknowledge that this lack of understanding is a contributing factor.

[Chart: Tripwire survey data on reasons for container security concerns. Source: Tripwire]

The remaining concerns include visibility into container security (52%), inability to assess risk in container images prior to deployment (43%), lack of tools to effectively secure containers (42%), and the most nebulous one: insufficient process to handle fundamental differences in securing containers (40%). I, for one, am deeply curious to know what they perceive these fundamental differences to be, given prior erroneous beliefs about cloud security.

To crystallize the confusion and anxiety, the survey results around infosec professionals’ desired security capabilities for containers are worth exploring, too. 52% quite reasonably desire incident detection and response – something we (Capsule8) provide. Another reasonable request, by 49% of respondents, is for isolation of containers behaving abnormally. Regrettably, 40% also want “AI security analytics” for containers, and 22% want blockchain to secure containers – so we can presume somewhere between 9% and 12% are sane, and at least 22% have absolutely no idea what they are doing.

[Chart: Tripwire survey data on desired container security capabilities. Source: Tripwire]

Beyond survey data, a straw man frequently raised by infosec is that each container requires its own monitoring, management, and securing, leading to time and effort requirements that spiral out of control. The whole point of containers is to be standardized, so such claims directly ignore the purpose of the technology. Yes, containers need to be monitored – but were you not monitoring your existing technology?

A cited fear of standardization itself is that vulnerabilities can be replicated many times as source code is used repeatedly. This ignores the status quo. Testing containers is still monumentally better than having developers write random queries every time in different parts of the application stack. At least in a container, you can find the vulnerabilities easily and orchestrate a patch to all relevant containers. Good luck finding the vulnerability in a custom-built Java app with intricate functionality.

It is as if infosec forgot the trials and tribulations of dealing with monolithic applications, as they now cite that “you know exactly where the bad guys are going to try to get in” because there was one service and a couple of ports. They apparently have not heard that “complexity is the enemy of security,” or have conveniently forgotten the mantra.

In a monolithic application, workflows are enormously complex, making it extremely difficult to understand every workflow within it – meaning it is nearly impossible to understand how workflows can be taken advantage of by attackers. Because microservices represent one workflow each and are standardized, they can be mapped out in an automated fashion, making threat models considerably easier. For instance, JSON mapping and Swagger are designed to describe exactly how APIs interact, and modern web appsec tools will ingest these maps to understand an app’s API endpoints.
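As a small illustration of that automated mapping, a few lines of Python with PyYAML can walk a Swagger/OpenAPI document and enumerate every endpoint, method, and declared authentication requirement – the raw material of a threat model. The file name is hypothetical:

```python
# A small illustration of automated API mapping: walk a Swagger/OpenAPI
# document and list every endpoint, method, and whether any authentication
# is declared. Uses PyYAML (pip install pyyaml); file name is hypothetical.
import yaml

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

for path, operations in spec.get("paths", {}).items():
    for method, details in operations.items():
        if method not in HTTP_METHODS:
            continue
        # Operation-level security overrides the document-level default.
        secured = bool(details.get("security", spec.get("security")))
        print(f"{method.upper():6} {path}  auth={'yes' if secured else 'NO'}")
```

Try doing that against a decade-old monolith with hand-rolled routing.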

Another vital, but overlooked, benefit of containers for security teams is immutability and ephemerality (as discussed in my Black Hat talk last year). An immutable container is one that cannot be changed after it is deployed — so attackers cannot modify it as it is running. An ephemeral container is one that dies after completing a specific task — leaving only a short window of opportunity for attackers to do their thing. Both characteristics embed security by design at the infrastructure level, and are far easier to implement with containers than with traditional monolithic applications. 
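Both properties are close to one-liners in practice. Here is a minimal sketch using the Docker SDK for Python, with a hypothetical image and command – the root filesystem is mounted read-only (immutable) and the container is deleted the moment its task completes (ephemeral):

```python
# A minimal sketch of immutability and ephemerality with the Docker SDK
# for Python. The image and command are hypothetical.
import docker

client = docker.from_env()

output = client.containers.run(
    "registry.example.com/report-job:1.0",  # hypothetical image
    command="python generate_report.py",
    read_only=True,              # immutable: root filesystem cannot be modified
    remove=True,                 # ephemeral: container is deleted when the task exits
    tmpfs={"/tmp": "size=64m"},  # scratch space, since / is read-only
)
print(output.decode())
```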

If you segregate identity and access management (IAM) roles in AWS, containers can only talk to each other based on what you specify, removing network services from your systems (a sketch follows below). Any infosec professional pretending authentication between microservices is not easy is either lying or has not actually attempted to learn how to do it. The shared environment of containers is a frequent fear as well, much like the concerns infosec held over the shared environment of the cloud. This, too, forgets history.
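Before returning to that history, here is a sketch of the role segregation mentioned above, using boto3: a least-privilege policy for a billing container’s task role that allows sending to exactly one queue and nothing else. Every name and ARN here is made up for illustration.

```python
# A sketch of least-privilege role segregation with boto3 (pip install
# boto3): a policy for the billing container's task role that allows
# sending to exactly one queue and nothing else. Every name and ARN here
# is made up for illustration.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sqs:SendMessage"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:billing-events",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="billing-service-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```

Now, back to the history that the shared-environment fear forgets.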

Before, your systems would talk over FTP, telnet, SSH, and random UDP ports, with port 80 talking to other things – but now, all of that network mapping is removed because you are using TCP, authenticated APIs, and HTTP standards. With containers, someone needs to pop a shell in (a.k.a. compromise) your web server infrastructure, whereas before, they could get in through a single running FTP service.

The update process for containers also concerns infosec practitioners – specifically, that it is still too easy for developers to use vulnerable versions of software. I ask: this is in contrast to what paradigm? When people were still using versions of Windows Server 2008 that were built with Metasploit backdoors ready to go? Software versioning is, was, and probably will always be an issue – containers or otherwise. Pretending this is a new issue is disingenuous. And containers present an opportunity in this regard — that you can ensure your software is compliant, secured, and patched before the workload even spins up.
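A minimal sketch of such a pre-deploy gate: refuse to ship any image containing package versions below a known-good floor. In reality the installed-package inventory would come from an image scanner; here it is hard-coded, and the version floors are illustrative.

```python
# A minimal sketch of a pre-deploy gate: refuse to ship an image containing
# package versions below a known-good floor. In practice the installed
# package inventory would come from an image scanner; here it is hard-coded,
# and the version floors are illustrative.
MINIMUM_VERSIONS = {"openssl": (3, 0, 13), "libcurl": (8, 5, 0)}


def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))


def image_is_compliant(installed: dict) -> bool:
    for package, floor in MINIMUM_VERSIONS.items():
        if package in installed and parse(installed[package]) < floor:
            print(f"BLOCK: {package} {installed[package]} is below the floor")
            return False
    return True


# This image would be rejected before the workload ever spins up.
print(image_is_compliant({"openssl": "1.1.1", "libcurl": "8.6.0"}))  # False
```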

In this modern world, you do have multiple services to keep track of, but you are also separating complex functionality into discrete services. With big, complicated applications, one of the key issues when moving from staging to production was tracking every single place where you needed to remove, for instance, stack traces. If you deploy in a container-based environment, you have one build for stage and one build for production, and you can track exactly what each system contains, building the API on top of it.
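As a small sketch of that idea, using Flask as a stand-in framework: rather than hunting down stack traces before each release, the production build simply bakes in a configuration that never emits them. The APP_ENV variable name is an assumption for illustration.

```python
# A small sketch of baking stack-trace behavior into the build rather than
# hunting it down before each release, using Flask as a stand-in framework.
# The APP_ENV variable name is an assumption; the stage image sets it to
# "staging", the production image to "production".
import os

from flask import Flask

app = Flask(__name__)

# Debug tracebacks exist only in the stage build; the production build can
# never emit them, no matter what a developer forgets.
app.config["DEBUG"] = os.environ.get("APP_ENV", "production") != "production"


@app.route("/health")
def health():
    return {"status": "ok"}


if __name__ == "__main__":
    app.run(debug=app.config["DEBUG"])
```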

Conclusion

During this exploration of these “modern monsters,” we saw that the security industry’s present fear of microservices (both APIs and containers) does not match their realistic threat model. Unlike with the concerns over cloud computing, security teams seem less reticent to acknowledge that part of their hesitation is driven by a lack of understanding – and acknowledging the problem is a necessary first step on the path to recovery.

Unfortunately, this apprehension also withholds opportunities for security teams to leverage microservices to improve organizational security. Promoting standardized APIs should reduce a whole host of security headaches – moving away from manual security reviews across knotted monoliths towards automated checks that an API endpoint adheres to the defined standard. And while containers are certainly not secure by default, they present an opportunity to scale security workflows – as well as to raise the cost of attack through their ephemeral nature.

It is all well and good to document the anxieties of infosec teams, but what can we do to handle these concerns? In the final part of this series, I will dive into the cheat codes for dealing with all of this — including recommendations on best practices for securing modern infrastructure.

References

[1]: A caveat here is that typical internal microservices often will not use encryption because of certificate challenges that create friction for engineers. Yet, this is undesirable, and will certainly panic your security team if done.

[2]: Canner, B. (2018, November 19). Ping Identity Releases Survey on the Perils of Enterprise APIs. Solutions Review. Retrieved from https://solutionsreview.com/identity-management/ping-identity-releases-survey-on-the-perils-of-enterprise-apis/.

[3]: Because of their apparent predilection for espousing FUD, I am not naming them so as to not give them more attention.

[4]: Tripwire. (2019). Tripwire State of Container Security Report. Retrieved from https://3b6xlt3iddqmuq5vy2w0s5d3-wpengine.netdna-ssl.com/state-of-security/wp-content/uploads/sites/3/Tripwire-Dimensional-Research-State-of-Container-Security-Report.pdf.
