This week we announced Investigations, new functionality that enables cloud users to maintain a dedicated store of security data without the cost or burden of setting up an actual database. In a nutshell, Capsule8 Protect’s Sensors can ship investigations event data as Apache Parquet files to Amazon S3 buckets or Google Cloud Storage. From there, the data can be queried with Amazon Athena or Google BigQuery, so security teams can quickly figure out what happened in an incident, figure out why it happened, and refine automated response actions to prevent it in the future.
To read a bit more about Investigations, you can check out our press release. In the meantime, we’d like to show you how it can be used with AWS Athena in a basic attack scenario.
TL;DR (Show me the Videos!)
For those who prefer being shown, not told, we have two videos just for you!
A scintillating attack scenario we investigate:
Or, watch us send Capsule8’s investigations data to S3 as Apache Parquet files:
The Story through the Eyes of a User
It was a Wednesday, and I was on Slack looking at cat pictures in #random, when I suddenly got a burst of summary alerts from one of our Capsule8 Sensors.
From our summary alert template, it looked to me like one of our Jenkins servers in our Kubernetes cluster within AWS was running a kernel exploit to escalate privileges, but Capsule8 killed it! That’s really strange. Why is someone running a kernel exploit on this cluster? Was an attacker still on the box?
Jenkins runs shell scripts, so the interactive shell alert could have been a poorly configured job, or a false positive, but the accompanying kernel exploit alert made this look particularly suspicious. How do we clean this up and track it back to root cause? Are they still here? I needed more information.
We are running an installation of Capsule8 that has one capsule8-sensor (think: data gatherer) per host in the cluster.
Luckily, the investigations features were enabled and were logging Parquet files to an S3 bucket. I ran the setup scripts so that Athena was configured to perform SQL queries on top of the bucket.
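The setup scripts aren’t reproduced here, but conceptually they point Athena at the bucket with an external table definition over the Parquet files. A hypothetical sketch of such a definition (the table name, column names, and bucket path below are illustrative, not the actual Capsule8 schema):

```sql
-- Illustrative only: the real Capsule8 schema and bucket path will differ
CREATE EXTERNAL TABLE IF NOT EXISTS investigations.shell_commands (
  unix_nano_timestamp BIGINT,
  incident_id         STRING,
  sensor_id           STRING,
  program_path        STRING,
  arguments           STRING
)
STORED AS PARQUET
LOCATION 's3://example-capsule8-investigations/shell_commands/';
```

Once a table like this exists, Athena can run standard SQL over the Parquet data in place, with no database servers to manage.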
I quickly logged into AWS Athena and started looking for shell_commands related to the alert. Was someone running other commands on the box? One of the neat features of the Capsule8 Protect Sensor is that when it detects investigation events related to an alert, it automatically adds a unique identifier called an incident ID to each event, connecting the event to the alert. Tying these events together allows us to see which shell commands may be attacker activity on the Jenkins container post-exploitation.
I ran the following query to find the incident ID for the Interactive Shell Alert:
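The original query screenshot isn’t reproduced here, but a sketch of what such a query might look like, assuming a hypothetical alerts table with incident_id and alert_name columns:

```sql
-- Hypothetical table and column names, for illustration only
SELECT incident_id, alert_name, unix_nano_timestamp
FROM investigations.alerts
WHERE alert_name LIKE '%Interactive Shell%'
ORDER BY unix_nano_timestamp DESC;
```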
After getting the incident ID cce768f6-c6bc-4688-b2d5-ff3c4e4477c0, I ran the following query to get all of the interactive shell commands that were run in that session:
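The exact query isn’t shown here; assuming the same hypothetical shell_commands schema as above, it might look something like:

```sql
-- Hypothetical schema: filter shell commands by the alert's incident ID
SELECT unix_nano_timestamp, program_path, arguments
FROM investigations.shell_commands
WHERE incident_id = 'cce768f6-c6bc-4688-b2d5-ff3c4e4477c0'
ORDER BY unix_nano_timestamp;
```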
This resulted in the following output:
From here I noticed that the Jenkins user was running a suspicious wget to fetch an executable called pwn from a website called exploit.delivery.
So the shell really was spawned from Jenkins? That would make sense, because this container should only be running Jenkins jobs. Googling the version of Jenkins, I discovered that it is vulnerable to CVE-2017-1000353 (eek). There are Metasploit modules for that, so even a script kiddie could take advantage of it. But how did the attacker get here? These servers aren’t exposed to the internet. Maybe they came from inside the cluster?
Using nslookup, I was able to find the Jenkins pod’s IP address:
Next we created a quick query of which program on which cluster and which hosts tried connecting to 100.71.160.171 on port 8080:
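Again assuming a hypothetical network events table (the table and column names here are illustrative), that query might be sketched as:

```sql
-- Hypothetical schema: who connected to the Jenkins pod on port 8080?
SELECT sensor_id, program_path, remote_address, remote_port
FROM investigations.network_flows
WHERE remote_address = '100.71.160.171'
  AND remote_port = 8080;
```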
This generated the following results:
Oh, look: there’s a Metasploit instance running on host 172-20-45-220. A quick kubectl get pods showed it running on that host (ip-172-20-45-220), which was part of the Kubernetes cluster and also running in a container.
Checking the image in the pod, we see:
I quickly recognized the pod as a version of Metasploit used by Capsule8 for demos and testing. Since Metasploit shouldn’t be running on this Jenkins cluster, I killed the pod.
Now that I had kicked them out, I needed to investigate whether any other hosts or containers had contacted exploit.delivery. I used the following query to do so:
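The query itself isn’t reproduced; a sketch of such a lookup, assuming a hypothetical DNS events table, might be:

```sql
-- Hypothetical schema: find any resource that resolved the bad domain
SELECT sensor_id, program_path, dns_query_name
FROM investigations.dns_requests
WHERE dns_query_name LIKE '%exploit.delivery';
```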
Luckily, it looks like no other resources contacted exploit.delivery.
But because I now knew that exploit.delivery is a bad domain, it looked like we needed to set up better egress filtering. Looking at the whois information for the exploit.delivery domain, I discovered it appeared to belong to a fellow employee, Brandon. To make sure we weren’t dealing with anyone from the internet, we did a quick scan of the AWS VPC flow logs into and out of the cluster and saw no connections from the internet. Time to go give Brandon a good talking-to for running Black Hat demos on the cluster!
In future blog posts, we hope to cover more details about how this works and cover some of the other event types beyond privilege escalation. If you’re at Black Hat and would like a demo, please contact us and we’d be happy to set up a time with you.
Pete Markowsky has been involved with information security and application development since first working with Northeastern University in 2001. He has worked across the security industry from .edu to .mil in roles such as developer, security engineer, risk analyst, and principal security researcher. Most recently he worked in security operations at Google. His security research has focused on building defensible systems and on attacking them.