The Road to Fail is Paved with Good Intentions
Vulnerabilities are often a security team's top concern, but when it comes to defending production systems, bugs aren't the whole story. A number of seemingly innocent developer behaviors can wreak just as much havoc, or even take an entire system down. These developers aren't malicious, and they don't intend to bork entire production environments. Developers are human, and it's understandable that shortcuts and skipped steps can be tempting.
Most apps aren't a single unit; they coordinate different components and services to deliver the desired functionality. As a result, application development can pull in many different libraries and APIs. Frontends affect backends and backends affect frontends. The growth of cloud services means that more apps than ever are developed under DevOps principles, which often means heavy use of containerization. Everyone is just trying to do their jobs.
Unfortunately, the consequences of rogue or unwanted developer behavior can be disastrous inside a production environment. Without policies that spell out what is acceptable, developers are likely to perform all of these behaviors at some point, and some will even with a policy forbidding them. That's why monitoring for and recovering from unwanted activity is so important. Here are three common rogue developer behaviors you need to keep an eye on:
Developers debugging in production
Remote debugging features make debugging in production tempting, and it's easy to assume that debugging an issue as soon as possible will prevent future headaches. Unfortunately, debugging in production can create major availability and performance problems, which are absolute no-gos for a live system.
Grzegorz Mirek explains one reason why it's not a good idea: "Most of our business applications handle many requests per second. There is no easy way to control breakpoints firing everywhere when your application is being remotely debugged. As you can imagine, we don’t want to block all of our users from using our application when we decided to debug it. More often than not, we also can’t just force our application to reproduce the bug which happened yesterday; sometimes the only way to do it is to wait until it happens again to one of our users. Thus, keeping a remote debug session in production without a strict control of how breakpoints fire is like putting landmines in the forest and inviting our users to run through it.”
Inviting users to run through landmines in exchange for developers finding bugs more easily is a costly tradeoff. Compounding this is the fact that debugging in production can also provide detailed information on both the system running the application and its users that can be used for future attacks.
Deploying tracing, monitoring, and performance analysis tools for production systems offers a less destructive alternative to debugging. At the very least, applications should only be debugged in test or staging environments. For software engineers, it can be tempting to debug an application that's in production, but it is too dangerous and should be avoided at all costs.
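As a rough illustration of the alternative, contextual log lines can capture much of the state a breakpoint would expose, without pausing request threads the way a remote debugger does. The service and field names below are hypothetical; this is a minimal Python sketch, not a recommendation of any particular tool:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")  # hypothetical service name


def process_order(order_id, items):
    """Price an order, emitting contextual logs instead of hitting breakpoints."""
    start = time.monotonic()
    # Each line records the state a breakpoint would have shown,
    # without blocking the request thread.
    log.info("order_received order_id=%s item_count=%d", order_id, len(items))
    total_cents = sum(price for _, price in items)
    elapsed_ms = (time.monotonic() - start) * 1000
    log.info("order_priced order_id=%s total_cents=%d elapsed_ms=%.2f",
             order_id, total_cents, elapsed_ms)
    return total_cents
```

Lines like these can be aggregated and searched after the fact, which also sidesteps the "wait until the bug happens again" problem the quote above describes.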
Surprise deployments or deployment before review
In an organization leveraging DevOps practices or any agile development environment, it can be tempting to deploy code before it passes required security reviews if software engineering teams are trying to move quickly or avoid the dreaded bottleneck of security. But as we’ve seen with countless surprise party entrances gone wrong, not everyone loves surprises — and it’s an especially bad idea when it comes to delivering software in production environments.
A common developer behavior is to “roll forward” when a planned deployment fails in prod. The failure is often due to a minor mistake that is straightforward to fix but wasn’t caught in staging or pre-production. Teams are often under pressure to ship features on time, the 90-90 rule suggests they’re likely to already be late, and it’s incredibly tempting to make the tiny hotfix and redeploy instead of issuing a rollback.
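To make the tradeoff concrete, here is a toy sketch of a deploy step that rolls back to the known-good version when a health check fails, rather than hot-patching forward. All names are illustrative, not a real deployment API:

```python
def deploy(new_version, current_version, health_check):
    """Activate new_version, rolling back to current_version on failure.

    health_check is any callable returning True when the given version
    is serving traffic correctly. Names here are hypothetical.
    """
    active = new_version
    if not health_check(active):
        # Roll back: the previous version is a known-good state,
        # unlike an untested hotfix pushed straight to prod.
        active = current_version
    return active
```

The point of the sketch is that rollback is a mechanical, pre-tested path, while a roll-forward hotfix is new, unreviewed code taking the riskiest route to production.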
Plenty of mistakes can be made when using APIs or libraries. Inefficient memory management can hurt performance or, worse, lead to application crashes and costly downtime. Simple yet disastrous syntax errors can slip in. This is why code review that checks for both performance and security problems should be part of CI/CD pipelines, and why surprise deployments are a no-go in production.
Some of the most disastrous surprise deployments are not of code, but of configuration. Bucket not accessible? Make it public! Database connectivity error? Open all the ports! Aside from security issues, manual configuration changes are responsible for a surprising amount of downtime events, both at the time of manual configuration and as a landmine for later automated deployments.
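One way to catch those landmines is to keep the reviewed configuration in version control and flag any live setting that departs from it. A minimal drift-check sketch, with hypothetical setting names:

```python
# The reviewed, version-controlled baseline (setting names are hypothetical).
BASELINE = {"bucket_public": False, "open_ports": [443]}


def config_drift(live):
    """Return the settings where the live config departs from the baseline."""
    return {key: live.get(key)
            for key in BASELINE
            if live.get(key) != BASELINE[key]}
```

Running a check like this before an automated deployment surfaces the manual "make it public" and "open all the ports" changes while they are still easy to revert.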
But security teams, take note: implementing heavy change approval processes in the form of onerous, opaque security reviews hinders the acceleration of software delivery that DevOps practices can bring. When the official policy is too onerous, a shadow policy of accepted behavior emerges. If you aren’t engaged with your developers’ software delivery workflows, don’t be surprised when they surprise you with deployments.
Downloading and mishandling sensitive data
Sensitive data can be anything from authentication credentials to credit card numbers, to private keys and machine identities, to all manner of PII. If you mishandle sensitive data, you could be exposing it to attackers — or causing compliance violations for your organization. Practicing the principle of least privilege in application design can help reduce the impact of engineers improperly accessing sensitive data. No software process, machine, application, or human user should have access to any sensitive data that they don’t absolutely need.
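In code, least privilege often reduces to a default-deny check: nothing may read a data class unless access is explicitly granted. A toy sketch with hypothetical role and data-class names:

```python
# Hypothetical roles mapped to the only data classes each may read.
ACCESS_POLICY = {
    "billing-service": {"payment_data"},
    "analytics-job": {"aggregated_metrics"},
}


def may_access(role, data_class):
    """Default-deny: grant access only when it is explicitly listed."""
    return data_class in ACCESS_POLICY.get(role, set())
```

The important design choice is the default: an unknown role, or an unlisted data class, gets nothing rather than everything.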
Debugging production services, beyond the dangers discussed above, can also lead to mishandling of sensitive data. To analyze why a service is misbehaving, software engineers will put a service into debug mode — which can often result in personal information, passwords, or other sensitive data being written to application logs. In a similar vein, common application security problems can jeopardize data security in production, like putting information in error messages that can inform application compromise or passing secrets in plaintext via URLs in APIs that can facilitate account takeover.
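A common mitigation is to scrub known-sensitive fields before log lines reach a sink. The key names below are illustrative, and a real deployment would maintain a much longer list; a minimal Python sketch:

```python
import re

# Illustrative key names; extend this list for your own schema.
SENSITIVE_KEYS = ("password", "token", "secret", "api_key")

_PATTERN = re.compile(
    r"(?P<key>" + "|".join(SENSITIVE_KEYS) + r")=(?P<value>[^\s&]+)",
    re.IGNORECASE,
)


def redact(message):
    """Mask sensitive key=value pairs, including secrets embedded in URLs."""
    return _PATTERN.sub(lambda m: m.group("key") + "=***", message)
```

Applied as a logging filter, a function like this catches both the debug-mode log lines and the plaintext-secret-in-URL cases described above.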
Data analysis involving production data can also pose serious concerns. If appropriate data pipelines aren’t in place, data analysts may open up a tunnel from production to another environment so they can perform analysis on business data. This often occurs in response to ad hoc queries from senior management, but predictably ends with someone mishandling data — or worse, accidentally modifying production data and causing outages.
Bad habits don’t mean bad developers. There are malicious developers with their own set of behaviors you need to watch for, but we’ll cover that in another post. Meanwhile, there are well-intentioned developer behaviors you need to monitor for to avoid impacting the speed and stability of your production environment.
Kim has been researching and writing about all facets of cybersecurity for years. She’s worked for many tech companies and publications, such as BlackBerry, AT&T Cybersecurity, Venafi, Tripwire, Sophos, 2600 Magazine, Infosecurity Magazine, and many others. Her perspective is the big picture of all matters infosec. Having appeared in the first volume of Tribe of Hackers, she will soon make her paperback debut. The Pentester Blueprint, a collaboration with offensive security thought leader Phillip Wylie, will be published by Wiley in December 2020.
In her spare time, Kim loves playing Japanese RPGs (especially the Persona series), conducting culinary experiments, reading rockstar biographies, and falling down Wikipedia and TVTropes rabbit holes.