Detecting Exploits with Novel Hardware Performance Counters and ML Magic
The end of July usually comes with a bit of extra preparation: updating your software, encrypting your devices, buying a burner phone, and so on, as the infosec community prepares to descend on the Las Vegas strip for Black Hat and Defcon. While the show is a little different this year because it is an all-virtual conference (a result of the world being a lot different this year because of an all-virus pandemic), we’re still eagerly anticipating some of the incredible research being presented at the show.
On Wednesday, August 5, two of Capsule8’s finest, research scientist Nick Gregory (Ghost) and data scientist Harini Kannan, will be presenting “Uncommon Sense: Detecting Exploits with Novel Hardware Performance Counters and ML Magic.”
The session focuses on the role of hardware performance counters (HPCs) as detectors for exploits. A hardware performance counter is a special-purpose register built into modern CPUs that counts low-level hardware events, such as cache misses, branch mispredictions, or instructions retired, and is exposed to software through the operating system (on Linux, via the perf subsystem). Using those counts, you can often derive patterns to see what is going on, good or bad. The research into HPCs was initially sparked by Capsule8’s detection of Spectre and Meltdown back in 2018, and the team began to look a bit more into other exploits detectable by HPCs. So far, only relatively simple and well-understood counters have been used in detection work, which not only limits the amount of information that can be gleaned from the system, but also gives an attacker the opportunity to bypass known counter-based detection techniques with minimal changes.
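To make the "derive patterns from counts" idea concrete, here is a minimal sketch of one common approach: compare per-interval counter readings against a known-good baseline and flag large statistical deviations. The counter values, interval, and z-score threshold below are invented for illustration; this is not Capsule8's actual detection logic.

```python
import statistics

def zscore_flags(baseline, observed, threshold=3.0):
    """Flag observed counter readings that deviate strongly from a baseline.

    baseline: counter values sampled during known-good runs
    observed: counter values to check
    Returns one boolean per observed sample (True = anomalous).
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [abs(x - mean) / stdev > threshold for x in observed]

# Hypothetical cache-miss counts per 100 ms interval during normal operation.
baseline = [1020, 980, 1005, 995, 1010, 990, 1000, 1015]
# One interval shows a large spike, as a cache side channel probing memory might.
observed = [1008, 992, 4500, 1003]
print(zscore_flags(baseline, observed))  # → [False, False, True, False]
```

A single-counter threshold like this is exactly the kind of simple detector an attacker can tune around, which is part of the motivation for looking at the full, undocumented counter set instead.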
Harini and Ghost want to “move beyond just scratching the surface of the HPC iceberg,” by uncovering previously overlooked/undocumented counters to help build up defenses against these types of attacks. The machine learning aspect is critical here, as the challenge became “What if we just try ALL of them?”
They began their journey with the simplest models possible, from logistic regression and single-layer perceptrons to ensemble methods like random forests and gradient-boosted trees, so that the models stayed as interpretable as possible. It was important to learn not only which models performed well, but also what the models had learned. Once the proper model was determined, and using the entire corpus of performance counters for commonly used baseline programs and behaviorally-similar malicious programs, they were able to zero in on which counters to use as features for their supervised classifiers.
During their talk on Wednesday, August 5th from 1:30pm-2:10pm PT, Ghost and Harini will showcase the results of this research, highlighting the uncommon and previously ignored performance counters that were lurking in the dark, carrying a wealth of useful information.
If you haven’t registered for Black Hat yet, you can do so at https://www.blackhat.com/us-20/. And if you do attend, be sure to stop by Capsule8’s virtual booth!