Do you ever wonder who is behind the cryptojacking attacks targeting the cloud? If you examine a compromised server, you will notice multiple attackers creating a chaotic mess of cron jobs, services, processes, and network connections. You will see evidence of different entities attempting to gain a foothold on the victim system. This talk looks at the actors behind this activity and their tactics.
Cloud resources make a lucrative target for cryptojacking. To run a successful campaign, an attacker must compromise servers and remain persistent long enough to turn a profit. Staying persistent means evading detection by the owners, typically by installing rootkits, adding multiple forms of persistence, and capping CPU usage to avoid triggering alarms. Once this is complete, mission accomplished, right? Not quite.
As it turns out, cryptojacking is so popular that many actors are competing for the same resources, and attackers routinely boot out anyone else who gets in their way. As seen in malicious scripts and binaries, attackers scramble to keep up with their rivals' TTPs, all while managing infrastructure in hopes that it doesn't get blacklisted.
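For responders triaging a server like the one described above, a first step is simply enumerating the persistence locations these actors fight over. The sketch below is illustrative only: the path list is a common-but-not-exhaustive set of Linux persistence spots, and the helper name is our own.

```python
import os

# Common Linux persistence locations that competing cryptojacking actors
# tend to litter with cron jobs, services, and rootkit hooks.
# Illustrative list, not exhaustive.
PERSISTENCE_PATHS = [
    "/etc/crontab",
    "/etc/cron.d",
    "/etc/cron.hourly",
    "/var/spool/cron",
    "/etc/systemd/system",
    "/etc/rc.local",
    "/etc/ld.so.preload",  # frequently abused by userland rootkits
]

def triage_persistence(paths=PERSISTENCE_PATHS):
    """Return the subset of known persistence locations present on this host."""
    return [p for p in paths if os.path.exists(p)]

if __name__ == "__main__":
    for path in triage_persistence():
        print("inspect:", path)
```

Each surviving path then deserves a manual look, since multiple actors often leave competing entries in the same file.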
This talk will discuss one of the first players in the game, the 8220 mining group, and how they target cloud-native technologies along with traditional applications. It then examines the prolific group Rocke, whose origins trace back to a fork of an 8220 mining group GitHub repository, and their continually evolving tactics. The talk also looks at Pacha, a group that adopts the tactics of its competitors while simultaneously disrupting their operations. You will learn about these groups and what they are likely to target. The talk is geared toward operators and incident responders who need to detect, prevent, and remediate these attacks, as well as anyone curious about what is happening behind the scenes or who enjoys the quirks of attacker behavior.
OTX Cumulus is a centralized content repository containing attack scenarios, response runbooks, and coverage-validation steps, inspired by projects like Atomic Red Team and Sigma. The content will be publicly available and open to community contribution.
With the rapid adoption of public cloud infrastructure, many companies lack dedicated cloud security employees with hands-on experience or the resources to build a cloud security program from the ground up. This can lead to two unfortunately common scenarios: buying and deploying a new product (or five) without the context or skills needed to understand, resolve, or communicate the findings; or being constrained by the use of existing "traditional" security capabilities to build new detections.
Even for experienced cloud professionals, threat content development is an intimidating process involving days of scouring API docs, code commits, and blog posts. Resolving findings can also be challenging for those who are new to the inner workings of the cloud: how do you disable a compromised IAM principal along with any of its ongoing sessions, or quarantine infected infrastructure without destroying forensic data? Are there security events that you'd expect to be logged, but aren't?
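To make the "disable a compromised principal and its ongoing sessions" question concrete: one documented AWS approach is to attach a deny-everything policy scoped by the `aws:TokenIssueTime` condition key, which invalidates credentials issued before a cutoff (this is what the console's "Revoke active sessions" button does for roles). The helper below only builds the policy document; the role name in the commented attach call is hypothetical.

```python
import json
from datetime import datetime, timezone

def revoke_sessions_policy(now=None):
    """Build an IAM policy document denying every action for credentials
    issued before `now`, using the aws:TokenIssueTime condition key."""
    now = now or datetime.now(timezone.utc)
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "DateLessThan": {
                    "aws:TokenIssueTime": now.strftime("%Y-%m-%dT%H:%M:%SZ")
                }
            },
        }],
    }

# Attaching it requires boto3 and credentials (illustrative, not run here):
# import boto3
# boto3.client("iam").put_role_policy(
#     RoleName="compromised-role",            # hypothetical name
#     PolicyName="RevokeOlderSessions",
#     PolicyDocument=json.dumps(revoke_sessions_policy()),
# )
```

For IAM users you would additionally deactivate access keys, since long-lived keys are not sessions and are unaffected by this condition.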
We will discuss the threat discovery process, challenges related to implementing detections (especially in hybrid and multi-cloud environments), quirky event logging, and how Cumulus can be used to reduce time spent researching and creating detections or adding context to your existing tools and alerts.
During an incident, answers are needed quickly. This often starts with evidence collection and log correlation. While companies generally have runbooks and standard operating procedures, the process tends to be manual, time-consuming, and prone to human error. Companies have successfully automated on-prem evidence collection using both open-source and enterprise tools, but few have tackled this task in the cloud.
At Goldman Sachs, we have built an automated, event-driven cloud response platform that uses AWS-native services to collect disk and memory from potentially compromised EC2 instances. All actions are logged via CloudTrail or custom host-based logs, ensuring that chain-of-custody and evidence-handling best practices are followed. The disk collection process is leveraged across three AWS organizations and used by over 3,000 AWS accounts, while the memory collection process is nearing final approval to be rolled out firm-wide.
In this talk, we'll walk through how automated and manual findings from Security Hub are passed as inputs to Step Functions, which orchestrate the collection of disk and memory. For disk, we'll deep-dive on how this process: 1) creates encrypted snapshots of impacted EBS volumes in the monitored account; 2) shares those snapshots back with the security account; and 3) spins up EC2 instances from a custom AMI that leverage open-source Linux tools such as dc3dd and incrond to stream dd images of those snapshots to S3. For memory, we'll discuss how this process leverages LiME or winpmem via AWS SSM to stream memory and the custom memory profile directly to S3. At the end of the collection process, Goldman Sachs' Security Incident Response Team has access to a dd image of each volume attached to the compromised instance, as well as a full memory capture and custom memory profile.
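The first two disk-collection steps (snapshot, then share with the security account) can be sketched with standard EC2 API calls. This is a simplified illustration, not the firm's implementation: error handling, encryption/KMS key sharing, and tagging are omitted, and `ec2` is assumed to be a boto3 EC2 client.

```python
def collect_volume_snapshots(ec2, instance_id, security_account_id):
    """Snapshot every EBS volume attached to `instance_id`, then grant
    the security account permission to use each snapshot.

    `ec2` is an EC2 client (e.g. boto3.client("ec2")); in a real
    pipeline the snapshots would also need their KMS key shared if
    encrypted with a customer-managed key.
    """
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance = reservations[0]["Instances"][0]
    snapshot_ids = []
    for mapping in instance.get("BlockDeviceMappings", []):
        volume_id = mapping["Ebs"]["VolumeId"]
        # Step 1: snapshot the volume in the monitored account.
        snap = ec2.create_snapshot(
            VolumeId=volume_id,
            Description=f"IR evidence for {instance_id}",
        )
        # Step 2: grant createVolumePermission to the security account.
        ec2.modify_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
            OperationType="add",
            UserIds=[security_account_id],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids
```

Step 3 would then run in the security account, copying the shared snapshots and imaging them with dc3dd to S3.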
Speaker: Brandon Sherman (Twilio)
Brandon has been working with AWS infrastructure for over five years, which is long enough to still have nightmares about EC2 Classic. While awake, he is the manager of Cloud Security at Twilio, where the challenge of real-time cloud communications requires thinking about security in new and exciting ways. While asleep, he is LEGO Batman.
Brandon's goal is to replace himself with a collection of equally poorly named micro-services and APIs, once machine learning has advanced enough to do that but not so far as to become Skynet. While waiting for the future, you can find him teaching anyone who will listen that they can be a "security person" too.
There has been a long-standing debate around "multi-account" and whether it's worth pursuing as a strategy, a debate which is finally beginning to coalesce around the answer: "Yes."
I have some news for you, and it's the first thing I wish I had known: your organization already has multiple accounts. You just might not know about them.
There are a variety of reasons this strategy is worth officially supporting and encouraging, and in this talk I will cover each of them. More accounts can make your organization more secure and more resilient, grant better control over data and encryption keys subject to multiple (and growing) compliance regimes, and more. All of these are good reasons from a security practitioner's perspective, but none of them is necessarily enough to convince the rest of your organization. I wish someone had told me that budgetary controls and growth planning are more convincing, if less exciting, arguments for a multi-account strategy.
I have made a number of mistakes on this journey, some of which I will cover in this talk: why you need to adopt a multi-account strategy, how to convince others to join you, the benefits of multiple accounts, both security-related and not, and the technical concerns that must be addressed along the way to keep everything functioning. I wish someone had told me all of this before embarking on my current multi-account journey. Now I'm ready to share my experience so you don't have to make the same mistakes.