Organizations are unearthing the potential of digital transformation, but security often remains a gatekeeper to this path of promised potential, largely due to its own delusions about what modern infrastructure means. As Herman Melville wrote in Moby Dick, “Ignorance is the parent of fear” – and security is too frequently hindered by its fear of the new and the agile precisely because of its ignorance about blossoming technologies.

In this blog series, drawn from my QCon talk last year, I will explore the history of infosec’s errant gatekeeping in the face of new technologies, and how we can encourage security to embrace new technologies to enable the business, rather than get in its own way. Part 1 and Part 2 are already published.

Now that we have traced the history of infosec’s wariness towards cloud computing and explored security’s present fears about microservices, how should we go forth in this muddled world? How can we evangelize real threat models and real solutions to security issues while prying traditional FUD-borne notions from enterprise infosec’s white-knuckled hands? In this final post of the series, I will detail the “cheat codes” for securing cloud and microservices environments and how to efficiently evangelize these best practices to security teams.

This discussion must start with how infosec perceives the engineers implementing all this newfangled, scary tech. Infosec tends to look at DevOps as reckless, overpowered frenemies rather than allies who could teach them a thing or two about process improvement. As one security professional (who shall remain nameless) said, “DevOps is like a black hole to security teams because they have no idea what DevOps is doing and have no way of ensuring security policy is enforced.” The current conflict is ultimately about control – the fact that security is not exclusively gripping the wheel anymore.

This means that engineers should be cautious when evangelizing cloud infrastructure, APIs, or containers to security folks. When someone is overwhelmed by fear, they will react quite poorly to being told to “calm down,” or that there is nothing to fear. Instead, engineers as well as infra-savvy security professionals must acknowledge that there are valid concerns arising from cloud and microservices environments – just not the ones commonly believed by the infosec industry.

Cheat codes for cloud, APIs, and container security

What realistic concerns should be highlighted to replace the security delusions I covered in the first two parts of this series? Before we dig into specific best practices for clouds, APIs, and containers, there are three fundamental security tenets to remember, one for each category:

  1. Do not publicly expose your cloud storage buckets (AWS S3, Google Cloud Storage, Azure Storage).
  2. Do not use unauthenticated APIs.
  3. Do not use “god mode” in your containers – minimize access wherever possible.

The fortunate news is that there are established security best practices for all of the “super scary” technology – and these best practices should absolutely make infosec’s job easier. If anything, infosec takes on the role of evangelizing and enforcing best practices rather than implementing anything themselves.

IAM as the new perimeter

Analogizing security in cloud or microservices environments to the old, pre-Copernican ways (when the firewall was the center of the security universe) can help translate modern best practices into the language of traditional security professionals. Security groups and the network isolation provided by CSPs are the firewall equivalents. Ingress and egress routes defined through AWS, GCP, or Azure are similar to firewall rules, letting you specify, “No, this resource can only talk to these systems.” It requires trusting that the CSPs properly segregate resources, but again: it is a delusion to believe you can do so better than the CSPs.
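To make the analogy concrete, here is a minimal sketch using boto3 (the security group ID and CIDR range are placeholders): the equivalent of a firewall rule saying this resource only accepts HTTPS traffic from one internal network.

```python
# Minimal sketch, assuming boto3 and an existing VPC security group.
import boto3

ec2 = boto3.client("ec2")

# "This resource can only talk to these systems": allow HTTPS in
# from a single internal CIDR block, and nothing else.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "10.0.1.0/24", "Description": "internal app tier only"}
            ],
        }
    ],
)
```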

Leverage your CSP’s tools

For cloud systems, making sure your AWS S3, Google Cloud Storage, or Azure Storage buckets are not available to the public is the most valuable step you can take to avoid data leaks like Accenture’s and Time Warner’s. AWS offers a wealth of tools to help ensure best practices, including Amazon Inspector (which looks for deviations from best practices) and AWS Trusted Advisor (which helps provision resources in line with AWS best practices).
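On AWS, this protection is a single API call away. Here is a minimal boto3 sketch (the bucket name is hypothetical) that shuts off every flavor of public access on a bucket:

```python
# Minimal sketch, assuming boto3; the bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

# Block public ACLs and public bucket policies, both current and future.
s3.put_public_access_block(
    Bucket="example-sensitive-data",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```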

Ensure the principle of least privilege

The CSP’s IAM roles can help ensure the principle of least privilege when accessing systems. Each provider has its best practices for IAM policies readily available, only a search away [1]. Another strategy is segmenting production and development environments by maintaining separate AWS accounts for each. Use assumed roles rather than standing users. This way, admins log in as read-only users, and you can create keys with fine-grained permissions without needing a user with a password for each key or service account.
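A hedged sketch of the assumed-role pattern with boto3 (the role ARN is a placeholder): the session’s credentials are short-lived and scoped to whatever the role permits.

```python
# Minimal sketch, assuming boto3; the role ARN is hypothetical.
import boto3

sts = boto3.client("sts")

# Trade long-lived user keys for short-lived, least-privilege credentials.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditor",  # hypothetical
    RoleSessionName="audit-session",
    DurationSeconds=3600,  # credentials expire after one hour
)

creds = response["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```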

API hygiene habits

Basic API hygiene will suffice for most organizations, consisting of authentication, validation, and the philosophy of not trusting external data. OWASP maintains a valuable “REST Security Cheat Sheet,” and its advice proves far simpler than the tangle of considerations for monolithic apps. For instance, sensitive data like API keys should never be exposed in the URL – instead, send it in the request body or request headers, depending on the request type. Only HTTPS endpoints should be used, and there should be access control at each API endpoint. Apply allowlists of permitted HTTP methods for each endpoint.
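To make these habits concrete, here is a minimal sketch written with Flask (my choice for illustration; the key store is hypothetical). The API key travels in a request header rather than the URL, and each endpoint declares an explicit method allowlist; TLS would typically be enforced by the proxy or load balancer in front of the service.

```python
# Minimal Flask sketch of basic API hygiene; simplified for illustration.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
VALID_API_KEYS = {"example-key"}  # hypothetical; use a real secret store


@app.before_request
def require_api_key():
    # Authentication: reject any request with a missing or unknown key.
    # The key arrives in a header, never in the URL.
    if request.headers.get("X-Api-Key") not in VALID_API_KEYS:
        abort(401)


@app.route("/orders", methods=["GET", "POST"])  # explicit method allowlist
def orders():
    return jsonify(status="ok")
```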

Granular allowlisting in microservices

In the vein of API hygiene, ensure you validate input and content types. As a rule, do not trust input; add constraints based on the type of input you are expecting. Analogize this to any traditional infoseccers as a form of granular allowlisting – previously impossible with monoliths, but now possible with microservices. Explicitly define what content types are intended and reject any requests with unintended content types in the header. This also engenders a performance benefit and is often part of API definition anyway – again, making the security team’s job much easier.
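Continuing the hypothetical Flask service above, a sketch of content-type allowlisting plus a simple input constraint might look like this:

```python
# Minimal sketch: reject unintended content types, then constrain input.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)


@app.before_request
def enforce_content_type():
    # Only JSON bodies are intended; everything else is rejected early.
    if request.method in ("POST", "PUT", "PATCH") and request.mimetype != "application/json":
        abort(415)  # Unsupported Media Type


@app.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json(silent=True) or {}
    # Constrain input to the shape we expect: a positive integer quantity.
    if not isinstance(payload.get("quantity"), int) or payload["quantity"] <= 0:
        abort(400)
    return jsonify(status="created"), 201
```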

God is not a mode

For containers, the most prevalent “threat” is misconfiguration – just as it is for cloud and APIs. Much of the security best practice for containers is related to access management, a common theme across modern technologies. Do not expose your management dashboards publicly. Do not let internal microservices remain unencrypted – use of a service mesh can reduce friction when implementing encryption.

Crucially, do not allow “god mode” or anonymous access in your containers – and generally make your access roles as minimal as possible. Any CISO will be very familiar with the concept of least privilege already. Do not mount containers as root with access to the host. Disable your default service account token. Enforce access control on metadata. These amount to the new security “basics” in the modern era.
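Assuming Kubernetes (the most common case) and its official Python client, those rules can be written directly into a pod specification; the names and image below are placeholders.

```python
# Minimal sketch using the official Kubernetes Python client.
from kubernetes import client

pod_spec = client.V1PodSpec(
    automount_service_account_token=False,  # disable the default token
    containers=[
        client.V1Container(
            name="app",
            image="registry.example.com/app:1.2.3",  # hypothetical image
            security_context=client.V1SecurityContext(
                run_as_non_root=True,             # never run as root
                allow_privilege_escalation=False,
                privileged=False,                 # no "god mode"
            ),
        )
    ],
)
```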

Your CI/CD is your new patch manager

Patching becomes palpably easier with containers – which can be seen as an antidote to the “Equifax problem,” in which the friction of taking systems out of production to patch them breeds the procrastination that contributes to an incident. Continuous releasing means versions will be upgraded and patched more frequently – and container patching can be baked into CI/CD pipelines themselves. Any infosec team should be delighted to hear that containers let you patch continuously and automatically, removing them from the awkward position of requesting downtime for necessary security fixes.
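What “baked into the pipeline” can look like, as a hedged sketch: a CI step that pulls the freshest patched base image and rebuilds on every release. The image names are placeholders, and a real pipeline would run tests before pushing.

```python
# Minimal sketch of a CI step that rebuilds on the latest patched base image.
import subprocess

BASE = "python:3.12-slim"                # hypothetical base image
TAG = "registry.example.com/app:latest"  # hypothetical app image

subprocess.run(["docker", "pull", BASE], check=True)  # fetch newest patches
subprocess.run(["docker", "build", "--pull", "-t", TAG, "."], check=True)
subprocess.run(["docker", "push", TAG], check=True)
```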

Leverage containers for resilience and visibility

The fact that containers are managed through images in a registry removes work for security, too. A container image can be rolled out or rolled back, which should restore a feeling of control for infosec teams. Further, visibility into which containers are affected by emerging vulnerabilities is much easier – container registries can be scanned to see which containers are vulnerable, instead of scanning production resources directly. And live migration becomes possible by slowly moving traffic from existing, vulnerable workloads to new, healthy ones, without any impact on the end user.
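For instance, assuming a Kubernetes deployment, reverting a vulnerable rollout takes two commands (sketched here via subprocess; the deployment name is a placeholder):

```python
# Minimal sketch: roll a workload back to its previous image revision.
import subprocess

subprocess.run(["kubectl", "rollout", "undo", "deployment/app"], check=True)
# Watch the rollout complete before declaring success.
subprocess.run(["kubectl", "rollout", "status", "deployment/app"], check=True)
```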

It will be hit or miss whether your organization’s security team really understands containers. You can try using the example of updating a Windows laptop to provide an analogy to live migrations. Usually, you have to shut down Word or PowerPoint and disrupt your work. Instead, imagine the Word document migrates to an updated OS in the background, followed by the PowerPoint presentation, until all the work is moved to the patched OS. Now, the unpatched OS can be safely restarted without interrupting work.

Codify secure configurations

It is critical for enterprise infosec teams to help codify secure configurations and enforce all of these best practices. This is the modern equivalent of crafting security policy templates [2] (but less painful). Infosec teams can lead the charge in documenting threat models for standardized APIs, containers, and other resources. They should start with the scenarios that would be most damaging to the business – customer data being leaked, data loss, disruption of service – then work backwards to the most likely avenues for attackers to accomplish those feats.
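One way to codify such configurations is policy as code: a check that infosec writes once and CI runs on every change. The config shape and rules below are purely illustrative, not a standard format.

```python
# Illustrative sketch of a codified configuration check for CI.
def violations(config: dict) -> list[str]:
    problems = []
    if config.get("privileged"):
        problems.append("privileged ('god mode') containers are forbidden")
    if config.get("run_as_user") == 0:
        problems.append("containers must not run as root")
    if config.get("dashboard_public"):
        problems.append("management dashboards must not be public")
    return problems


# Example: a proposed config that would fail review.
assert violations({"privileged": True, "run_as_user": 0})
```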

Prioritize protecting prized pets

Infosec teams should put additional effort into securing prized “pets” (vs. cattle), which are enticing to attackers and less standardized. As shown through the surveys mentioned in the prior post, visibility is one of the most coveted capabilities among enterprise infosec teams, and is crucial for protecting prized pets. However, the types of tools that could provide the right visibility for infosec teams are often already used by operations teams seeking to optimize performance. This is a propitious opportunity for security and DevOps to collaborate, with the benefit of sparing the budget and integration work that duplicate functionality would otherwise require.

Build your audit use cases

Hitting the right compliance boxes can encourage adoption of modern tech as well, since compliance is a consistent budget item. File integrity and access monitoring (known as “FIM” and “FAM”) is an underpinning of nearly every compliance standard, from PCI to HIPAA to SOX. FIM/FAM requires monitoring and logging of file events for a few different purposes, but primarily to catch unauthorized modification of sensitive data (a violation of data integrity) and to create audit trails of which users accessed sensitive data (to preserve data confidentiality).
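As a minimal illustration of what FIM/FAM requires, here is a sketch using the Python watchdog library (my choice for illustration; dedicated tools handle this at scale and capture richer context). It logs modifications under a sensitive directory to build a simple audit trail.

```python
# Minimal FIM sketch using the watchdog library.
import time
import logging
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

logging.basicConfig(filename="file-audit.log", level=logging.INFO)


class AuditHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # A fuller agent would record the acting user and process, too.
        logging.info("modified: %s", event.src_path)


observer = Observer()
observer.schedule(AuditHandler(), "/var/data/sensitive", recursive=True)  # hypothetical path
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```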

Because of the improved inspectability of containers, FIM/FAM becomes easier – even without a tool like Capsule8, which does it for you. Because microservices are distilled into simpler components than in a monolithic application, it is easier to pinpoint where sensitive data is being handled, helping target monitoring efforts. Demonstrating the ease with which visibility is obtained can help assuage concerns about control. Note, however, that infosec professionals are less familiar with the term “observability,” so translation is required when collaborating.

Caveats and cautions

Each CISO and infosec team maintains different priorities and possesses different skills, so not every tactic here will necessarily be effective for every team. Some teams prioritize compliance work, others seek to rigorously define policy, and yet others are only familiar with maintaining network security equipment and SIEMs. Many enterprise infosec practitioners will be more proficient with Windows than Unix, think in a network-centric model, and rarely develop anything themselves. Therefore, patience, analogies, and proof that not all control is lost will be critical in gaining buy-in.

Conclusion

It is hard to let go of long-held beliefs, and the firewall-centric model in a well-understood world of monoliths is tricky to dislodge from the heart of enterprise information security. Many of infosec’s fears over modern technology can be distilled into fears over losing control. For those in DevOps functions looking to help infosec evolve — or security professionals wanting to help their teams enter the modern era — assuaging those fears by redirecting control from grasps at threat phantasms towards tangible, meaningful threat mitigation is an essential step forward.

Work together to build secure standards for APIs and containers, to document appropriate cloud configurations, and to create threat models that can help continuously refine design towards more secure outcomes. Enterprise infosec teams, freed of many maintenance burdens through native controls and standards, can now focus on securing the “pets” in this modern world. Security will no longer herd cats and cattle, but instead be an evangelizer and enforcer of best practices.

Everyone maintains delusions in one fashion or another, but I sincerely believe we are not bound to them like Andromeda chained to a rock in the stormy sea. Information security can survive this Copernican revolution of cloud and microservices, but it could use its Perseus to save it from its Cetus – the devouring fear fueled by the siren song of infosec vendors urging it to succumb to dread. My hope is that my guidance throughout this series can help us unchain infosec, allowing it to go forth into a new dawn of secure and resilient software delivery performance.

References

[1]: If you don’t feel like Googling for them, here are the links to each: Security Best Practices in AWS IAM; Using Google Cloud IAM securely; Azure identity & access security best practices

[2]: These are often required by compliance, and most CISOs should have familiarity with them.
