Speakers


Beyond the Big Three: Mastering Oracle Cloud Security in a Multi-Cloud World

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

Dani Kaganovitch

Dani Kaganovitch is a Product Manager at RockSteady, a stealth cloud security startup. Before that, Dani worked at Google Cloud and Oracle Cloud, helping customers navigate various cloud use cases at scale in the areas of core infrastructure workloads, FinOps, and observability. Working with hundreds of organizations of different sizes, Dani organized and presented technical workshops at conferences, which led Dani to become an advocate for effectively and efficiently solving real-world multi-cloud security challenges. Now, Dani focuses on ensuring customers’ environments are secure by design through the application of security policies that are practical, enforceable, and don’t break production.

Abstract

Oracle Cloud Infrastructure (OCI) has its own approach to security and policy enforcement, which differs significantly from AWS, Azure, and GCP. While organizations moving to OCI expect familiar security controls, OCI Governance Rules, IAM Policies, and Security Zones operate differently from AWS SCPs, RCPs, and Declarative Policies, Azure Policies, and GCP Org Policies. This session will break down what security practitioners need to know about OCI’s security model, how its enforcement mechanisms compare to those of other cloud providers, and example use cases for integrating OCI into a multi-cloud security strategy.
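
To make the contrast concrete, here is a minimal, hedged sketch of how the two models express a similar guardrail. The group, compartment, account, and tag names are placeholders, and the snippet is illustrative only, not taken from the session:

    import json

    # OCI IAM policies are human-readable statements scoped to a compartment or tenancy,
    # rather than JSON documents attached to principals. All names below are placeholders.
    oci_policy_statements = [
        "Allow group NetworkAdmins to manage virtual-network-family in compartment Prod",
        "Allow group Auditors to read all-resources in tenancy",
    ]

    # A roughly comparable AWS service control policy is a JSON Deny document attached to
    # an organizational unit; note the inverted, deny-by-exception style.
    aws_scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyVpcChangesOutsideNetworkAdmins",
            "Effect": "Deny",
            "Action": ["ec2:CreateVpc", "ec2:DeleteVpc"],
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:PrincipalTag/team": "network-admins"}},
        }],
    }

    print("\n".join(oci_policy_statements))
    print(json.dumps(aws_scp, indent=2))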


Breaking AI Agents: Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents

Theme: Mapping the frontier: supporting new clouds and technology

Speakers

Jay Chen

Jay Chen is a Security Researcher at Palo Alto Networks, specializing in cloud and AI security. His work involves identifying vulnerabilities, design flaws, and adversarial tactics in cloud-native technologies. Recently, he has shifted focus to GenAI security, researching threats to AI systems and adversarial uses of AI. Previously, Jay researched mobile cloud security and distributed storage security. He has published over 30 academic and industrial papers.

Royce Lu

Royce Lu is a security researcher at Palo Alto Networks. He has published research at top international security conferences, including BlackHat and Virus Bulletin. Currently, his interest is in LLM safety, covering areas such as LLM agent security, jailbreak automation, and handling LLM I/O security. Before GenAI security, Royce conducted research in network security. At the start of his career, he focused on malware and computer security.

Abstract

AI agents are rapidly transforming industries through autonomous planning, decision-making, and interaction with external environments. As cloud providers accelerate the deployment of services that simplify building these AI-driven applications, the security implications of this emerging technology remain largely unexplored. This talk reveals concerning security issues discovered within AWS Bedrock Agents—demonstrating how attackers can exploit prompt injection and misuse integrated tools to compromise these agents. Specifically, our research uncovers techniques that lead to information leakage, agent hijacking, unauthorized tool execution, and manipulation of persistent agent memory. The issues originate from AI models’ inherent probabilistic nature combined with inadequately secured prompt instructions, which attackers exploit to subvert internal planning and decision-making processes. Although our research primarily examines AWS Bedrock Agents, the issues and attack techniques discussed extend broadly across similar agent frameworks. We will share our methodology, key findings, mitigation strategies, and highlight important open research questions. Our goal is to foster proactive dialogue among cloud security researchers, practitioners, and AI developers to address these emerging security challenges collaboratively.


Bypassing AI Security Controls with Prompt Formatting

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Nathan Kirk

My name is Nathan Kirk, and I’m a Director at NR Labs (https://nrlabs.com/), a cybersecurity consulting startup. I have over a decade of experience with penetration testing, mostly focused on hardware and web applications. Before NR Labs, I was a Senior Consultant at Mandiant working with their Offensive Services division, and a Director at Hilton, where I helped build their penetration testing and Bug Bounty programs.

Abstract

In this talk, we will present the prompt formatting technique, which we used to reliably bypass the Sensitive Information Filter functionality within Bedrock Guardrails, a service used to secure AI systems in AWS. Sensitive Information Filters are used by Guardrails to prevent Bedrock AI systems from returning sensitive information to users, such as names and email addresses. Instructing the AI model to return data as programmatic, SQL-like queries modified the output enough to bypass this security control, similar to WAF evasion. We have also developed a system prompt to help AWS customers mitigate this bypass, which we will discuss during the talk.
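
As a rough illustration of how such a bypass can be tested (not the exact technique or guardrail from the talk), the sketch below assumes the Bedrock ApplyGuardrail API and an existing guardrail with a PII filter; the guardrail ID and payloads are placeholders:

    import boto3

    # Hedged sketch: does a PII-bearing string trip the Sensitive Information Filter when
    # phrased as plain prose versus as an SQL-like statement? Guardrail ID is a placeholder.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    candidates = {
        "plain": "The customer's email address is jane.doe@example.com.",
        "sql_formatted": "INSERT INTO customers (name, email) VALUES ('Jane Doe', 'jane.doe@example.com');",
    }

    for label, text in candidates.items():
        resp = client.apply_guardrail(
            guardrailIdentifier="gr-EXAMPLE",  # placeholder guardrail
            guardrailVersion="1",
            source="OUTPUT",                   # evaluate the text as model output
            content=[{"text": {"text": text}}],
        )
        # "GUARDRAIL_INTERVENED" means the filter caught it; "NONE" means it passed through.
        print(label, resp["action"])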


Challenges around AI-as-a-Service logging

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

Jeremy Snyder

Jeremy is the founder and CEO of FireTail.io, an end-to-end API security startup. Prior to FireTail, Jeremy worked in M&A at Rapid7, a global cyber leader, where he worked on the acquisitions of 3 companies during the pandemic. Jeremy previously led sales at DivvyCloud, one of the earliest cloud security posture management companies, and also led AWS sales in southeast Asia. Jeremy started his career with 13 years in cyber and IT operations. Jeremy has an MBA from Mason, a BA in computational linguistics from UNC, and has completed additional studies in Finland at Aalto University. Jeremy speaks 5 languages and has lived in 5 countries.

Abstract

Like other PaaS offerings that the CSPs build on top of third-party products, the current wave of LLM services comes with its own set of logging and observability challenges. We’ll explore several of them and share lessons learned from tackling this problem for both observability and detection-and-response purposes.


Challenges implementing egress controls in a large AWS environment

Theme: Packing your gear: tools for operating safely

Speaker

Greg Aumann

Greg holds a degree in Electrical and Computer Systems Engineering from Monash University, Australia. He has a diverse background spanning telecommunications, software engineering, infrastructure, and cloud security. Over the past decade, he has focused exclusively on AWS, with the last seven years dedicated to cloud security. Greg is currently employed at Block.

Abstract

Network egress controls are a well recognised technique to defend against exfiltration of sensitive data and malware/attackers using command and control channels. AWS has managed services for this: Network Firewall, DNS Firewall, and VPC endpoint policies. I implemented egress controls at scale using these services and encountered many implementation challenges.

This presentation addresses these challenges, including service limitations, techniques for evading the controls, and unexpected issues presented by several services. You’ll learn what security these controls can and cannot provide when implemented correctly, and how to successfully approach a large-scale implementation.
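
To ground the discussion, here is a minimal sketch of one of those managed services in code: creating an AWS Network Firewall stateful domain allowlist with boto3. The rule group name, capacity, and domain list are placeholders, and this is illustrative rather than the speaker's implementation; as the talk explains, SNI- and host-header-based allowlisting can itself be evaded.

    import boto3

    # Illustrative egress allowlist: only permit TLS/HTTP traffic to approved domains.
    # Name, capacity, and targets are placeholders, not recommendations.
    nfw = boto3.client("network-firewall", region_name="us-east-1")

    response = nfw.create_rule_group(
        RuleGroupName="egress-domain-allowlist",
        Type="STATEFUL",
        Capacity=100,
        Description="Allow egress only to approved domains (illustrative)",
        RuleGroup={
            "RulesSource": {
                "RulesSourceList": {
                    "Targets": [".amazonaws.com", ".example-saas.com"],
                    "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                    "GeneratedRulesType": "ALLOWLIST",
                }
            }
        },
    )
    print(response["RuleGroupResponse"]["RuleGroupArn"])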


Closing

Theme: Odds & Ends

Speaker

Abstract

Conference wrap-up


Data Perimeter Implementation Strategies: It is one thing to know how to configure SCPs/RCPs, and another for your organization to implement them

Theme: Mapping the frontier: supporting new clouds and technology

Speakers

Agnel Amodia

I’m Agnel Amodia, a Senior Technical Lead at Vanguard Group, specializing in Identity and Access Management. With over 15 years of experience, including 7 years in cloud security, I design enterprise-grade security systems for AWS cloud databases. Previously, I worked as a system programmer and researcher in India, building neural-network-based machine learning software for the National Crime Records Bureau. I’m also a passionate security researcher who loves finding loopholes and crafting solutions. For me, security isn’t just work — it’s a passion I truly enjoy.

Ben Joyce

I’m Ben Joyce, IAM Cloud Leader at Vanguard Group, with about 20 years’ experience in platform engineering, operations, and cloud security. My focus is building secure, scalable cloud environments that enable innovation while ensuring compliance in highly regulated industries. I work with engineering teams to design IAM strategies that balance security and usability. I’m passionate about solving real-world IT and FinTech challenges — from securing multi-cloud setups to streamlining security processes. Cloud security should enable the business, not block it, and I love building solutions that make security seamless for developers.

Abstract

AWS IAM is getting more and more complex—permissions policies, permission boundaries, session policies, resource-based policies, service control policies, and now the latest buzz: Resource Control Policies (RCPs). Defining security boundaries on paper? That’s the easy part. But rolling them out across hundreds of AWS accounts running critical financial applications—that’s where things get tricky.

At Vanguard, we found a way to keep security tight without slowing things down. Instead of being the impeding team, we focused on making cloud security an enabler, not a blocker. In this talk, we’ll share how we built and deployed SCPs and Resource Control Policies (RCPs) to set security boundaries at scale—without causing downtime for business applications.

While implementing data perimeter controls with a layered strategy, we ran into real-world challenges: dynamic VPC IDs and corporate CIDR changes made it tough to keep SCPs up to date; Resource Control Policies do not support the global condition keys we needed for the S3 service; integrating defense-in-depth CI/CD pipeline controls with the data perimeter controls took careful design; and we had to protect identities and resources from being tagged through the AWS console. Finally, verifying the effectiveness of these controls was non-trivial because of inconsistent access-denied patterns.
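
For readers new to the policy type, a minimal, hypothetical data perimeter RCP is sketched below. The organization ID is a placeholder, real deployments need many more exceptions, and this is not Vanguard's policy:

    import json

    # Illustrative resource control policy: deny S3 access to principals outside the
    # AWS Organization, while exempting calls made by AWS services on our behalf.
    data_perimeter_rcp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "EnforceOrgIdentitiesForS3",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"},
                "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
            },
        }],
    }

    print(json.dumps(data_perimeter_rcp, indent=2))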


Defenders hate it! Compromise vulnerable SaaS applications with this one weird trick

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Eric Woodruff

Throughout his 25-year career in the IT field, Eric has sought out and held a diverse range of roles. Currently the Chief Identity Architect for Semperis, Eric was previously a member of the company’s Security Research and Product teams. Prior to Semperis, Eric worked as a Security and Identity Architect at Microsoft partners, spent time at Microsoft as a Sr. Premier Field Engineer, and spent almost 15 years in the public sector, with 10 of them as a technical manager.

Eric is a Microsoft MVP for security, recognized for his expertise in the Microsoft identity ecosystem. His security research has also been recognized by Microsoft, most notably for his findings he dubbed “UnOAuthorized”. Eric is a strong proponent of knowledge sharing and spends a good deal of time sharing his insights and expertise at conferences as well as through blogging. Eric further supports the professional security and identity community as an IDPro member, working as part of the IDPro Body of Knowledge committee.

Abstract

In June 2023, Descope published research on nOAuth, a critical OpenID Connect implementation flaw that enables user account takeover in vulnerable applications. Following the disclosure, Microsoft and the Microsoft Security Response Center (MSRC) published articles on this issue, highlighting common anti-patterns and their follow-up actions with impacted application owners.

Fast forward to the fall of 2024, and nOAuth remains an active security threat. In this session, we will explore its persistence, unveiling new research that builds upon Descope’s original findings to identify additional implementation flaw patterns and methods for staging the abuse. We will also discuss how we uncovered vulnerable applications, the varying responses from developers, and what this means for securing modern SaaS applications.

Attendees will leave with a deeper understanding of how nOAuth attacks work, real-world examples of its exploitation, and actionable strategies to mitigate this critical risk.


Detecting the Undetectable: Threat Hunting in Appliance Environments

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speakers

Sagi Tzadik

Sagi Tzadik is a security researcher on the Wiz Research team. His expertise lies in identifying and exploiting vulnerabilities in web applications, as well as in network security and protocols. He has been recognized for his work and was featured on the MSRC Top Security Researcher Leaderboard.

Shahar Dorfman

Shahar is a threat hunting researcher at Wiz, where she focuses on identifying and analyzing emerging cyber threats to enhance security defenses.

Abstract

Malicious actors often exploit persistent threats to maintain long-term access to target systems by leveraging vulnerabilities and common misconfigurations. This is especially problematic in environments like appliances, where legitimate administrators may not have direct access to the file system, making detection and remediation even more difficult. In this session, we will walk you through our approach, which leverages a significant advantage of cloud environments: the ability to collect metadata at scale from a diverse range of products, including appliances. We will examine two real-life case studies where we used this technique, along with extensive metadata analysis, to uncover previously undetected threats. Join us in this session to learn how we’ve enhanced security through metadata analysis and improved detection, and to explore how we can collaborate to strengthen defenses across harder-to-monitor systems like appliances.


Double Agents: Exposing Hidden Threats in AI Agent Platforms

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speakers

Michael Katchinskiy

Michael Katchinskiy is a Security Researcher at Microsoft Defender for Cloud. His work focuses on researching and analyzing new attack vectors in cloud-native environments, specializing in Kubernetes and integrating CNAPP data to detect and prevent attacks.

Hagai Kestenberg

Hagai Kestenberg is a Security Researcher at Microsoft Defender for Cloud. His work focuses on AI and Kubernetes research in cloud-native environments.

Abstract

AI agents are everywhere, transforming business operations and driving innovation across industries. To accelerate adoption, cloud providers are rapidly developing agent-building platforms that simplify deployment and integration. However, their widespread adoption introduces significant security risks. In this session, we will showcase the methodologies and techniques attackers use to compromise organizational AI agents, uncovering vulnerabilities that allow adversaries to bypass security controls and access organizations’ sensitive data. We will dissect these emerging threats and their impact on enterprise security. Finally, we will offer actionable mitigation strategies and best practices to help organizations protect their AI-driven environments against these evolving threats.


ECS-cape - Hijacking IAM Privileges in Amazon ECS

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Naor Haziz

Naor Haziz is a security researcher and low-level developer at Sweet Security with over seven years of experience in vulnerability research, exploit development, and system internals. He holds a degree in Computer Science and previously served as an officer in the IDF Intelligence Corps, leading a team focused on Windows and Linux security. At Sweet Security, he develops the company’s security sensor, designing and implementing high-performance detection capabilities for cloud environments. His work combines low-level development and cloud security research to improve monitoring, threat detection, and defense mechanisms, ensuring robust protection for modern cloud infrastructures against evolving security threats.

Abstract

Hijacking Privileges in the Cloud: Breaking Role Boundaries in Amazon ECS

Modern cloud environments rely on fine-grained identity and access management (IAM) to enforce security boundaries. But what happens when those boundaries break? In our research, we uncovered a vulnerability in an undocumented Amazon ECS protocol that allows a low-privileged role running on an EC2 instance to hijack the IAM privileges of higher-privileged containers on the same machine.

This talk will explore the technical details of this attack and how it exploits shared infrastructure in containerized environments. In addition, we will provide best practices on avoiding role co-location risks, ensuring that high-privilege tasks are never deployed alongside low-privilege workloads in ways that could allow privilege hijacking.
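
For background on why co-located tasks share sensitive plumbing, the sketch below shows the documented way an ECS task normally obtains its role credentials from the link-local metadata endpoint. It is context for the talk, not the undocumented protocol or the exploit itself:

    import json
    import os
    import urllib.request

    # ECS injects AWS_CONTAINER_CREDENTIALS_RELATIVE_URI into each container; the task
    # fetches its role credentials from the 169.254.170.2 link-local endpoint. Anything
    # that lets one workload reach another task's credential material on the same host
    # effectively collapses the IAM boundary between them.
    relative_uri = os.environ["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]
    with urllib.request.urlopen(f"http://169.254.170.2{relative_uri}") as resp:
        creds = json.load(resp)

    print("Role ARN:", creds.get("RoleArn"))
    print("Expires:", creds.get("Expiration"))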


Farewell False Positives: Building Trustworthy AI for IaC Analysis

Theme: Packing your gear: tools for operating safely

Speaker

Emily Choi-Greene

Emily has spent her whole career building and securing services on AWS (using CloudFormation, CDK, and Terraform). She started at Amazon Alexa, led data security & privacy at Moveworks, and is now the CEO & co-founder of Clearly AI, a YC-backed startup automating security and privacy reviews.

Abstract

While AI can dramatically accelerate security review of IaC, unchecked hallucinations render many solutions worse than useless. This talk demonstrates practical techniques for building trustworthy AI systems that can reliably analyze Terraform and CDK code for misconfigurations and vulnerabilities. Through live demonstrations of hallucination detection, output validation, and claim verification pipelines, attendees will learn how to build safeguards to use AI as a dependable cloud security tool. We’ll examine where the “hallucination hotspots” lie, how to leverage open-source libraries and prompting to prevent them, and how to generate actionable remediation plans that actually work in real-world cloud environments.
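
As a toy version of the claim-verification idea (entirely illustrative and not Clearly AI's pipeline), one cheap safeguard is to drop any AI finding that cites a Terraform resource address that does not literally appear in the scanned source:

    import re

    # Illustrative guardrail: discard AI findings that reference Terraform resources
    # not present in the code under review, a common "hallucination hotspot".
    def grounded_findings(terraform_source: str, findings: list[dict]) -> list[dict]:
        declared = re.findall(r'resource\s+"([\w-]+)"\s+"([\w-]+)"', terraform_source)
        declared_addresses = {f"{rtype}.{name}" for rtype, name in declared}
        # Keep only findings whose cited resource really exists in the source.
        return [f for f in findings if f.get("resource") in declared_addresses]

    tf_source = '''
    resource "aws_s3_bucket" "logs" {
      bucket = "example-logs"
    }
    '''
    findings = [
        {"resource": "aws_s3_bucket.logs", "issue": "no default encryption configured"},
        {"resource": "aws_s3_bucket.backups", "issue": "public ACL"},  # hallucinated
    ]
    print(grounded_findings(tf_source, findings))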


Happy Little Clouds: Painting Pictures with Microsoft Cloud and Identity Data

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Matt Graeber

Matt is a threat researcher focused on detecting Microsoft cloud and identity threats. Coining the term and establishing the strategy of “living off the land” in 2013 along with Chris Campbell, he has an extensive history of identifying ways to abuse native functionality in Microsoft products. Matt is dedicated to helping make defense accessible to all.

Abstract

You’re tasked with detecting an Entra ID, Azure, or Microsoft 365 attack technique. Where do you start? How do you identify which data sources are available to observe the technique? Of the data sources available, what constitutes quality data with which a coherent story can be told? What are the elements of the story that need to be told so that a responder can ask the right questions and respond with confidence? How do data sources need to be correlated, and can they even be directly correlated? What the heck is a SessionId versus a UniqueTokenIdentifier, how are they related, and why do they matter?

Anyone who has ever been tasked with developing detection guidance for cloud and identity threats in the Microsoft stack will know well just how fragmented and under-documented these security data sources are. This session will attempt to bring sanity to telling effective stories when investigating and detecting threats, based on a formal methodology for assessing the quality of any given data source. Join Cloudsec Bob Ross as he reveals the art and science behind threat storytelling, and learn to distinguish malicious strokes from happy little accidents.


I Didn't Register for This: What's Really in Google's Artifact Registry?

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Moshe Bernstein

Moshe is a Senior Security Researcher specializing in cloud vulnerability research at Tenable Cloud Security. With nearly a decade of experience in cybersecurity, Moshe has developed a strong focus on network and operational security, web vulnerability research, and cloud infrastructure security.

Abstract

We scanned all of the Google-owned container images you might be using on the Artifact Registry for vulnerabilities and secrets. You probably won’t like what we found.


I SPy: Rethinking Entra ID research for new paths to Global Admin

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Katie Knowles

Katie Knowles is a Security Researcher at Datadog, focused on Azure research. Through her past roles, Katie has had the chance to approach security as both an attacker and defender, from incident response and detection engineering to penetration testing. She holds Azure (AZ-104, AZ-500) and offensive security (OSCP, GPEN) certifications.

Abstract

Backdooring Microsoft’s applications is far from over. Adding service principal (SP) credentials to these apps to escalate privileges and obfuscate activities has been seen in nation-state attacks, and led to the development of new security controls. Despite these efforts, we uncovered a vulnerable, built-in SP that could have allowed escalation from Application Administrator to any hybrid tenant user (including Global Admin).

Join us for an overview of SPs, app registrations, and the history of backdoor credentials on these identities. This talk will illustrate how building on existing SP research led to a new vulnerability, and cover controls that can help mitigate similar risks. Finally, we’ll identify leads for future SP investigations, and how you can use past research to seek your own vulnerabilities.


IAM Roles Anywhere - now for everyone with Let's Encrypt

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

Dhruv Ahuja

Dhruv is a former SRE and founded Chaser Systems in 2020. He’s mostly Wiresharking, tinkering with PKI or tuning stacks as he once did in the low-latency world of financial data, only this time for network security. He is also a Rust programmer, cares deeply about developer experience, dabbles in cryptography and holds a master’s degree in Advanced Software Engineering from King’s College London. He’s always 5 years of practice away from being able to play Chopin on the piano - an accomplishment that will surely coincide with IPv6 overtaking IPv4.

Abstract

This talk will explore a lesser-known technique for safely deploying IAM Roles Anywhere on platforms without a key management service or secret storage.

An impediment to the adoption of IAMRA is the absence of an existing PKI solution, or the expense and expertise needed to run a Private CA. Therefore, we will look at integrating Route 53 with an ACME-enabled PKI, such as Let’s Encrypt, for device enrollment with autonomous short-lived certificate issuance.

Come along for a deep dive into:

  • Configuring IAMRA with targeted CA certificates.
  • Certificate Attribute Mappings for client authentication.
  • The corresponding Trust Policy on a Role.
  • Extending the AWS SDK via its credential helper so temporary session credentials are transparently returned to the calling process.

We will also build detection for abuse of private keys from logs in CloudTrail, should they leak.

For contrast, using a hardware-backed private key store, such as a YubiKey, with an ACME-enabled PKI will also be demonstrated.
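
As a starting point for that detection (a hedged sketch only: it assumes IAM Roles Anywhere CreateSession calls appear in CloudTrail under the rolesanywhere.amazonaws.com event source, which should be validated against your own trail):

    import json
    import boto3

    # Hedged hunt: flag IAM Roles Anywhere session issuance from unexpected networks.
    EXPECTED_PREFIXES = ("203.0.113.",)  # placeholder for known corporate egress ranges

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
    for page in cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventSource",
                           "AttributeValue": "rolesanywhere.amazonaws.com"}]
    ):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            src_ip = detail.get("sourceIPAddress", "")
            if not src_ip.startswith(EXPECTED_PREFIXES):
                print("Review:", detail.get("eventName"), "from", src_ip)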


Inside Microsoft's Battle Against Cloud-Enabled Deepfake Threats

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speakers

Alessandro Brucato

Alessandro is a senior Threat Research Engineer at Sysdig, working on cloud security. His research mainly focuses on cloud threats and supply chain attacks. In addition to research, he’s keen on bug bounty programs and has received rewards from several large companies. Alessandro is also a contributor to Stratus Red Team, a tool to emulate offensive attack techniques in the cloud, and Falco, a graduated CNCF project.

Stefano Chierici

Stefano Chierici is a Threat Research Manager at Sysdig, where his research focuses on defending containerized and cloud environments from attacks ranging from web to kernel. Stefano is a contributor to Falco, a graduated CNCF project. He studied cyber security in Italy, and before joining Sysdig, he was a pentester. He obtained the OSCP certification in 2019 and has worked as a security engineer and a red team member.

Abstract

In December 2024, Microsoft’s Digital Crimes Unit (DCU) took legal action against LLMjacking threat actors, who developed tools designed to bypass the guardrails of generative AI services to create offensive and harmful content. Specifically, Microsoft’s legal complaint addresses the unlawful generation of harmful images using Microsoft’s Azure OpenAI Service. AI-generated deepfakes are realistic, easy to make, and increasingly used for fraud, abuse, and manipulation. This poses a threat to political elections, consumers of online services at risk of fraud, and the online safety of women and children. The involved threat actors built a sophisticated scheme to abuse the cloud AI services of compromised accounts and then sell access to end users for a wide range of illicit activities, including deepfakes. In effect, LLMjacking has made deepfakes a cloud infrastructure threat.

During the talk, we will go through the technical aspects of the operation carried out by the cybercriminal group Storm-2139, a global network of creators, providers, and end users. Attendees will be equipped with practical knowledge to better protect their organizations from this evolving threat in the cloud landscape.


Introducing GRC Engineering: A New Era of AWS Compliance

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

AJ Yawn

AJ Yawn is an experienced cybersecurity leader specializing in cloud compliance, governance, risk, and compliance (GRC) engineering, with nearly 15 years of experience. AJ currently serves as Director of GRC Engineering at Aquia, leading innovative approaches to compliance automation and cloud security. He previously founded ByteChek, a compliance automation startup focused on SOC 2 and HIPAA, achieving over $1M in annual recurring revenue. AJ also served as a partner at Armanino LLP, a top 20 CPA Firm, spearheading product innovation in compliance and audit automation.

As a dedicated educator, AJ instructs courses on cloud compliance and security automation for the SANS Institute and LinkedIn Learning, where he has educated over 125,000 professionals worldwide. AJ began his career as a U.S. Army Officer in the Signal Corps, earning the rank of Captain, and later grew the cloud compliance practice at Coalfire from a small team into a thriving practice. His professional mission remains focused on transforming compliance into an accessible, automated, and value-driven discipline.

Abstract

Traditional cloud compliance often relies on manual, checklist-driven processes that struggle to keep pace with modern cloud infrastructure’s complexity and agility. This session introduces GRC Engineering, a fresh, proactive approach that integrates Governance, Risk, and Compliance (GRC) principles directly into the AWS engineering lifecycle.

Attendees will explore how GRC Engineering leverages automation, infrastructure as code, and AWS-native tools to transform compliance from a reactive burden into a strategic asset. Real-world examples will demonstrate tactical methods for embedding compliance seamlessly into AWS environments, using services such as AWS Config, AWS Audit Manager, and automation frameworks.
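
As a small taste of that pattern (a sketch using a standard AWS managed Config rule, not a claim about the speaker's framework), a control can be deployed as code so it is versioned, reviewable, and repeatable across accounts:

    import boto3

    # Illustrative GRC-engineering building block: deploy an AWS Config managed rule from code.
    config = boto3.client("config", region_name="us-east-1")

    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-bucket-public-read-prohibited",
            "Description": "Checks that S3 buckets do not allow public read access.",
            "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        }
    )
    print("Config rule deployed")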

Participants will walk away equipped with actionable insights and strategies for adopting GRC Engineering practices, streamlining compliance processes, reducing operational risk, and achieving continuous compliance in AWS environments.


Introduction

Theme: Odds & Ends

Speaker

Abstract

An introduction to fwd:cloudsec North America 2025


Inviter Threat: Managing Security in a new Cloud Deployment Model

Theme: Packing your gear: tools for operating safely

Speaker

Meg Ashby

Meg does cloud security for Alloy, a fintech in NYC. Previous to Alloy she worked at Marcus by Goldman Sachs, but that was way less fun. At Alloy, Meg does IAM, networking, data, and kubernetes security (and everything else related or tangentially-related to AWS & security). When detached from her computer, Meg dances and is part of a ballet performance group.

Abstract

Vendors are looking for ways to differentiate themselves in a crowded market and organizations are looking for solutions that are cheaper, faster, and easier for their teams to deploy and manage. SaaS providers are now offering a “vendor-managed-deployment” option for their product, where the employees of the SaaS company install the cloud infrastructure and software into your environment and maintain this access for ongoing maintenance. This can be enticing on both sides - enabling the vendor to focus on core product development rather than secondary “features” (including deployment templates) and freeing infrastructure teams from re-architecting and managing another tool in your stack.

However, the risks introduced in this new paradigm are immediately clear - expanded cloud attack surface, granting elevated access to another entity, and redefining your posture on insider threat are just the beginning. Yet, for some organizations the tradeoff in control is well worth the operational and cost savings proposed by this model.

In this talk we’ll cover how this new deployment option differs from existing well-established integration patterns and scenarios where this deployment option can benefit your organization. Additionally, we will provide key considerations to keep in mind when considering this deployment option, and strategies for mitigating risk and maintaining security in both initial deployment stages and ongoing support.


Keeping your cloud environments secure during a merger or acquisition

Theme: Packing your gear: tools for operating safely

Speaker

Isaac Lepow

Isaac is a security engineer with a background in a variety of areas of security, including cloud security, automation, threat intelligence, and anti-phishing. He has worked for Proofpoint and Capital One in various security roles.

Abstract

Acquiring another company can be hard. Acquiring another company with an existing cloud environment can be even harder.

The organization you are acquiring will almost certainly be doing some things differently than yours in the cloud. Their cloud environment could be less mature than yours (or more mature for that matter). Best practices can change over time. Other factors that are not specific to your cloud environment can still impact it. All of these things can introduce new security risks to your cloud environment, and some of them in ways you may not expect.

In this talk, we will discuss some of the possible complicating factors when migrating another organization’s cloud environment to your own, and strategies for mitigating them.


Logs don't mean a thing: Unraveling IaC-Managed Identity Ownership

Theme: Packing your gear: tools for operating safely

Speakers

Dan Abramov

Dan Abramov is a security researcher at Token, specializing in Non-Human Identity (NHI) security. With a rich background in both offensive and defensive cybersecurity, Dan spent five years in Unit 8200. Following his service, he worked for two years at Mitiga as an incident responder, focusing on Cloud native attacks and defense mechanisms. Dan plays the piano and Saxophone, is a great dancer and loves any kind of sports.

Eliav Livneh

Eliav Livneh is a cybersecurity expert with over twelve years of defensive and offensive security experience. He is a founding researcher at Token, specializing in identity security. Prior to Token, Livneh spent five years in the elite 8200 unit of the Israel Defense Forces’ Intelligence Corps, and four years as a founding researcher at Hunters, focusing on AWS threat detection and response. Livneh has a piano cover channel on YouTube, enjoys cycling, and is a geoscience enthusiast.

Abstract

Knowing who owns an identity is crucial for proper identity management and incident response. However, as IAM is increasingly managed in infrastructure-as-code frameworks, it is becoming harder to answer questions of identity ownership. Platform audit logs (e.g., CloudTrail, Entra ID audit logs) are no longer enough to identify which human users created or managed specific identities.

In this talk, we will share our experience in tackling the challenge of unraveling IaC-based ownership, utilizing data sources such as IaC codebases and CI/CD logs, using static code analysis, heuristics and LLMs.
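
To give a flavor of the codebase-side signal (a deliberately simplified sketch; real attribution also needs CI/CD run logs, module handling, and organization-specific heuristics), one can map an IaC-managed role back to the humans who last touched its definition:

    import re
    import subprocess
    from pathlib import Path

    # Simplified sketch: attribute Terraform-managed IAM roles to the git authors who
    # last modified the files defining them.
    def role_owners(repo: str) -> dict[str, str]:
        owners = {}
        for tf_file in Path(repo).rglob("*.tf"):
            source = tf_file.read_text()
            for match in re.finditer(r'resource\s+"aws_iam_role"\s+"([\w-]+)"', source):
                author = subprocess.run(
                    ["git", "-C", repo, "log", "-1", "--format=%an <%ae>", "--", str(tf_file)],
                    capture_output=True, text=True, check=True,
                ).stdout.strip()
                owners[match.group(1)] = author
        return owners

    for role, owner in role_owners(".").items():
        print(f"{role}: last touched by {owner}")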


No IP, No Problem: Exfiltrating Data Behind IAP

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Ariel Kalman

Ariel Kalman is a cloud security researcher based in Israel, actively engaged in cloud-related security research at Mitiga. With a specialization in application security, Ariel excels at discovering new attack vectors associated with cloud environments.

Abstract

Google Cloud’s Identity-Aware Proxy (IAP) is often seen as the final gatekeeper for internal GCP services - but what happens when that gate quietly swings open? This session uncovers how subtle misconfigurations in IAP can lead to serious data exposure, even in environments with no public IPs, strict VPC Service Controls, and hardened perimeters. We’ll introduce a new vulnerability in IAP that enables data exfiltration, allowing attackers to bypass traditional network controls entirely, without ever sending traffic to the public internet. In addition, we’ll walk through real-world examples of overly permissive IAM bindings, misplaced trust in user-supplied headers, and overlooked endpoints that quietly expand the attack surface. Attendees will gain a deeper understanding of IAP’s internal workings, practical detection strategies, and a critical perspective on trust boundaries in GCP.


Not So Secret: The Hidden Risks of GitHub Actions Secrets

Theme: Packing your gear: tools for operating safely

Speaker

Amiran Alavidze

Amiran is a passionate product security professional with over 20 years of experience spanning systems engineering, security operations, GRC, and product and application security. As a security engineering leader, he champions a pragmatic, scalable approach to security - where collaboration between security, developer, and platform teams turns security into a business enabler rather than a bottleneck.

With a deep understanding of evolving cloud architectures and modern development practices, Amiran focuses on helping organizations align security with velocity, ensuring defenses scale effectively in dynamic environments.

An avid supporter of the local security community, he is actively involved with the OWASP Vancouver chapter and DC604 DEFCON group.

Abstract

If your CI/CD pipelines are built on GitHub Actions, you might be using GitHub Actions secrets to securely store credentials for connecting to your cloud environments. The security model for GitHub Actions secrets is not very intuitive. Many organizations assume that repository and organization-level secrets offer sufficient protection, but in reality these secrets lack granular access controls, exposing organizations to hidden security risks.

In this talk, we’ll break down the different types of secrets in GitHub Actions (organization, repository, and environment), the protections they offer, and their limitations. We’ll explore how misconfigurations lead to a false sense of security and discuss a more robust approach using environments and environment protection rules. We’ll also examine OpenID Connect (OIDC) for cloud authentication - where there are no long-lived secrets - but where misconfigurations can still introduce risks, and how environment-based protections help.

You’ll leave with a clearer understanding of GitHub Actions secrets, their exposure risks, and practical strategies to better protect cloud permissions of your CI/CD pipelines. Whether you’re securing sensitive credentials or refining your OIDC configurations, this session will equip you with actionable defenses to keep your automation secure at scale.
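
As one concrete example of the environment-scoped OIDC pattern discussed above (account ID, organization, repository, and environment names are all placeholders), an AWS role trust policy can be pinned to a single repository environment:

    import json

    # Illustrative trust policy for GitHub Actions OIDC: only workflow runs in the "prod"
    # environment of example-org/example-repo may assume this role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                    "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:environment:prod",
                }
            },
        }],
    }

    print(json.dumps(trust_policy, indent=2))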


Patience brings prey: lessons learned from a year of threat hunting in the cloud

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speakers

Greg Foss

Greg Foss is a seasoned cybersecurity leader with over 15 years of experience spanning threat research, security operations, and offensive security. As the Engineering Manager of Threat Detection Engineering at Datadog, he leads a team of elite threat hunters and detection engineers, developing cutting-edge defenses against sophisticated cloud-native intrusions by nation-state and criminally motivated adversaries. His team transforms deep research and intelligence into actionable security insights, strengthening Datadog’s security platform.

Anthony Randazzo

Anthony Randazzo leads the detection engineering function at Datadog on their cloud security platform. He has nearly 20 years of experience in security operations roles across SecOps management, detection engineering, incident response, and threat intelligence. He’s been particularly focused on cloud-native threat management across these newer attack surfaces the past 6 years.

Abstract

Although AWS has been around for over 15 years, cloud threat hunting remains a relatively nascent discipline. While opportunistic threats like cryptocurrency mining are well-known, large-scale, cascading attacks targeting cloud-native infrastructure are less frequently discussed.

Over the past 18 months, we’ve significantly expanded our cloud threat hunting operations using vendor-agnostic strategies to better understand these emerging threats. This talk will outline our unique approach, which combines hypothesis-driven investigations, TTP-based hunts, and anomaly detection to proactively uncover threats at scale. We’ll also highlight our experiments with broader, cross-functional hunt operations that extend beyond our core team.

Attendees will gain insights from our large-scale cloud attack surface analysis and walk away with a deeper understanding of the evolving cloud-native threat landscape.
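
As a deliberately trivial illustration of the anomaly-detection leg (not the speakers' tooling), even counting which principals perform which API calls in recent CloudTrail management events and flagging first-seen combinations can surface hunting leads:

    import json
    from collections import Counter

    import boto3

    # Toy hunt: surface principal/API-call pairs seen only once in recent CloudTrail
    # management events. Real hunts run over far larger, normalized datasets.
    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
    combo_counts = Counter()

    for page in cloudtrail.get_paginator("lookup_events").paginate(MaxResults=50):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            principal = detail.get("userIdentity", {}).get("arn", "unknown")
            combo_counts[(principal, detail.get("eventName"))] += 1

    for (principal, action), count in combo_counts.items():
        if count == 1:
            print(f"First-seen activity: {principal} -> {action}")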


Putting Workload Identity to Work: Taking SPIFFE past day 0

Theme: Packing your gear: tools for operating safely

Speaker

Dave Sudia

Dave Sudia went from Platform Engineering to Product Engineering; in both roles he has had to stand up infrastructure in repeatable but constantly evolving architectures, taking into account usability, security, and scalability. He is the world’s biggest fan of Infrastructure-as-Code. By day you’ll find him enabling developers to do their best work and by night you’ll find him hanging with his kid, whose hobbies are now Dave’s hobbies.

Abstract

With the rise in popularity of open-source standards and tools like SPIFFE and SPIRE, it’s never been easier to get off the ground with issuing all your workloads a flexible cryptographic identity.

But this is just the start of your workload identity journey! The real challenge begins in putting these identities to work in your infrastructure in replacing legacy authentication mechanisms such as long-lived shared secrets. It’s difficult to know where to get started.

This talk will:

  • Briefly outline SPIFFE and Workload Identity
  • Explore the options for using SPIFFE for authentication and authorization, with a focus on techniques appropriate for existing infrastructure
  • Dive into a handful of practical examples of introducing SPIFFE-based authentication between legacy services, and between legacy services and cloud APIs
  • Describe higher-level strategies for rolling out workload identity in an organization, based on experience helping large organizations approach this work
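
For readers new to the space, the "day 0" step the talk moves past is simply asking a local SPIRE agent for an X.509 SVID. A minimal sketch, assuming a running agent with the default Workload API socket path:

    import os
    import subprocess

    # Ask the local SPIRE agent for this workload's X.509 SVID. Assumes spire-agent is
    # installed and its Workload API socket is at the path below.
    SOCKET = "/tmp/spire-agent/public/api.sock"
    OUT_DIR = "/tmp/svid"
    os.makedirs(OUT_DIR, exist_ok=True)

    result = subprocess.run(
        ["spire-agent", "api", "fetch", "x509", "-socketPath", SOCKET, "-write", OUT_DIR],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # prints the SPIFFE ID and certificate details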


Read Between The Logs: A New Vulnerability in Gemini Cloud Assist Proves the Threat is Real

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Liv Matan

Liv is a Senior Security Researcher at Tenable, specializing in cloud, application and web security. As a bug bounty hunter, Liv has found vulnerabilities in popular software platforms, including Azure, Google Cloud, AWS, Facebook and GitLab. Liv was recognized by Microsoft as a Most Valuable Security Researcher and ranked among the top eight Google Vulnerability Researchers for 2024. He has also presented at conferences including Black Hat USA, DEF CON Cloud Village, SecTor, Bsides LV, fwd:cloudsec and INTENT. You can follow Liv on X @terminatorLM.

Abstract

What if I told you that the AI tools your defenders use on a regular basis could expose your cloud environment to risk?

With a single wrong click by a defender attempting to analyze a log using Google Cloud’s promising new AI summarization feature, or with one innocent prompt, the defender could quickly become the victim of a phishing attack—or worse, suffer sensitive resource data exfiltration.

This talk will showcase a vulnerability I discovered in Google’s new flagship AI assistance tool, Gemini Cloud Assist, where attackers can embed malicious prompts into a victim’s logs. When the user reviews these logs with Gemini Cloud Assist, the attacker may exploit Gemini’s integrations or deceive the user into accessing a legitimate-looking phishing link. This vulnerability discovery reveals a new attack class in the cloud that defenders should be aware of. Additionally, I expanded my research to include a similar service in Azure, Azure Copilot, which does not yet seem to be mature enough to be susceptible to this attack class.

Diving into my research on both Google’s Gemini Cloud Assist and Azure Copilot, I will enable defenders to better understand the risks arising from these emerging services. By the end of this talk, the audience will learn about new techniques for monitoring malicious prompts in the cloud that they can apply in their environments, focusing specifically on the log sources we believe attackers will target in the future.


Rebuilding ROADRecon for the Modern Entra Environment

Theme: Packing your gear: tools for operating safely

Speaker

Thomas Byrne

Thomas is a security consultant at WithSecure Consulting. He has experience in a range of areas, including application, network, and cloud security. He focuses mainly on Azure and DevOps, and researches cloud-specific vulnerabilities outside of work.

Abstract

In the ever-evolving landscape of cybersecurity, tools that help security professionals enumerate and understand their environments are invaluable. ROADRecon, an open-source tool designed to enumerate Azure AD (now Entra) environments, has been a staple for many. However, with the impending deprecation of the Azure AD Graph API, ROADRecon faces a significant challenge.

The session will begin with an introduction to security assessments in Azure, highlighting the challenges and the role of automated tooling, specifically ROADRecon. A particular challenge that we will explore is ensuring continued operation of tools that previously made use of the Azure AD Graph API and enhancing them with support for different APIs that can provide security professionals with an accurate view of the tested environment.

The core of the presentation will focus on the implementation of the Microsoft Graph API in ROADRecon, including the hurdles encountered and the solutions developed. This will involve an in-depth discussion on Entra’s implementation of OAuth, first-party applications, and pre-consented permissions, which are crucial for understanding how attackers can bypass security protections.

As we explore legitimate usage of Microsoft Graph, we will demonstrate how lesser-known APIs (e.g. Ibiza API) can be used to enhance reconnaissance capabilities and provide an equivalent method for fetching tenant information that would not be logged. Lastly, we’ll finish with an explanation of possible preventative and detective controls available to organizations to try and mitigate the usage of these APIs for malicious activities.
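
For orientation, the documented baseline that tools like ROADRecon build on is plain Microsoft Graph paging, sketched below. Token acquisition, throttling, and the undocumented alternatives discussed in the talk (such as the Ibiza API) are deliberately out of scope here:

    import requests

    # Baseline, fully documented enumeration: page through servicePrincipals via Microsoft
    # Graph. TOKEN is a placeholder access token with Application.Read.All or equivalent.
    TOKEN = "eyJ..."  # placeholder

    url = "https://graph.microsoft.com/v1.0/servicePrincipals?$select=id,appId,displayName"
    headers = {"Authorization": f"Bearer {TOKEN}"}

    while url:
        page = requests.get(url, headers=headers, timeout=30).json()
        for sp in page.get("value", []):
            print(sp["appId"], sp["displayName"])
        url = page.get("@odata.nextLink")  # Graph returns this until paging completes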

What Attendees Will Take Away:

  • Understanding of OAuth in Entra: Attendees will gain a foundational understanding of how OAuth is implemented in Entra and learn how first-party applications and the concept of pre-consented permissions can be used for offensive security purposes.

  • Transition from Azure AD Graph to Microsoft Graph: We will explore the impact of Microsoft’s deprecation of Azure AD Graph in favor of Microsoft Graph on both offensive and defensive security teams. There are crucial differences between the APIs that affect how threat actors must approach Azure estates now and how defenders can detect such attacks.

  • Tool Enhancements: An introduction to the enhanced capabilities of the rebuilt ROADRecon tool, including its use of undocumented APIs like the Ibiza API.

  • Detection strategies: How defenders can detect modern security tooling such as ROADRecon, the challenges of doing this at scale, and the possibility of detecting undocumented APIs.


Securing Remote MCP Servers

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

Jake Berkowsky

Jake is a Principal Architect heading Snowflake’s Cybersecurity Data Cloud. At Snowflake, Jake’s mission is to evangelize and enable the implementation of modern security analytics and engineering. Prior to joining Snowflake, Jake has had a diverse background of technical and leadership roles having most recently served as Co-Founder and CTO of a Cloud Consulting and Data Intelligence company. He regularly maintains his experience and interests in the areas of cloud, devops and development and is an active outdoorsman and nature enthusiast.

Abstract

Once again, what’s old is new. It’s looking like MCP is going to be here for a while, so it’s only a matter of time before an enabled developer asks for sign-off on something that works great on their local machine.

This lightning talk is geared to provide cloud security professionals with an up-to-date understanding of best (or least bad) practices. We’ll cover:

  • Layers of uncertainty: With a spec in active development and not even a transport layer fully agreed upon, how do you approach deploying something before the recommended architectures are even decided on?
  • Trends: What are other folks doing? Is OAuth actually feasible? What else has been done? How are people working around limitations and what are the risks? What is SSE, why is it deprecated but still implemented everywhere and what do I do when they tell me it “needs websockets”?
  • Documented paths forward: What is the community doing? What standards have been released to help align? What tools and frameworks exist to make our jobs easier?
  • What could go wrong? A dive into cloud specific threat vectors, covering the theoretical and maybe even real world incidents


Securing organizations’ ML & LLMOps deployments: A platform architect’s journey onboarding LLM & MLOps tools and securing multi-cloud data access

Theme: Mapping the frontier: supporting new clouds and technology

Speakers

Kyler Middleton (she/her)

Kyler grew up in rural Western Nebraska, fixing neighboring farmers’ computers in exchange for brownies and Rice Krispies. She was going to be a librarian to help people find the information they need, then discovered computers were a real job and more than just a fix for her munchies. Since then she has been a systems, network, call center, and security engineer, and is now a DevOps lead and software engineer. She speaks at any conference that will have her, hosts the Day Two DevOps podcast from Packet Pushers, and writes up cool projects with approachable language and pictures on her Lets Do DevOps site, with the intention of upskilling anyone at any skill level. She has an insatiable curiosity and a desire to help the folks around her succeed and grow. So - Lets Do DevOps.

Sai Gunaranjan

Sai Gunaranjan is an Enterprise Architect with hands-on experience in strategizing and designing technology systems and applications for cloud platforms. Passionate about leveraging technology to solve complex business problems, Gunaranjan is accountable for overall platform security and availability as a senior member of the cloud platform team at Veradigm. He resolves the complex security challenges of cloud services adoption by partnering across business units to migrate applications to the cloud while ensuring security and availability are sustained.

Abstract

AI and ML are rapidly becoming foundational technologies for enterprises, offering powerful enhancements to products and services. However, onboarding and securing MLops and LLMOps tools while enabling access to real-world data is particularly challenging, especially in highly regulated industries like healthcare.

This session will provide a practical, security-focused deep dive into reference architectures for securing ML and LLM workloads across multi-cloud environments. We will explore native security controls in AWS and Azure, access patterns for sensitive data, and best practices for protecting AI workloads from unauthorized models and insecure data pipelines.

Attendees will gain actionable insights into designing secure AI/ML environments while balancing performance and compliance needs, including real-world lessons learned from platform architects navigating this evolving landscape.


Shared-GPU Security Learnings from Fly.io

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

Matthew Braun

Matthew Braun has over 20 years of experience operating and testing secure systems across the government, defense, and private industry sectors. In his current role as Director of Security at Fly.io, a public cloud provider, Matt’s responsibilities cover the entirety of Fly.io’s security program. Matt has been privileged to work with and learn from Very Smart People at Fly.io, as well as in his previous role as a penetration tester at Matasano Security/NCC Group. Matt has a Bachelor’s and a Master’s in Computer Science, is a proud father of twins, attempts woodworking, is a runner and occasional sailor, and serves on the boards of two arts non-profits.

Abstract

In 2024 Fly.io made a big bet that developers would want access to cloud GPU compute resources. While that bet didn’t quite pay off, we spent a lot of time (and money) finding a way to provide shared customer access to NVIDIA GPU hardware in a secure manner. When the work was done, we had a much greater understanding of the risks presented by GPUs, as well as possible mitigations, which may be useful to anyone looking to provide GPU resources to customers. This lightning talk will include:

  • Technical details of the challenges faced in implementing secure GPU access, including why existing NVIDIA GPU virtualization technologies were unsuitable
  • An overview of the threats associated with offering shared or virtualized GPU access
  • A review of the architecture of NVIDIA datacenter-grade GPUs, with a focus on security-relevant subsystems
  • A dive into PCIe functionality, threats, and mitigations
  • The conclusions and recommendations from our security evaluations of the hardware and OS environments


Staying Sneaky in the Office (365)

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Christian Philipov

Chris is a principal security consultant and leads the specialist services team within WithSecure Consulting. Day to day, he leads the global team that handles various types of engagements, both transactional and more bespoke in nature. Chris specialises predominantly in Microsoft Azure, with GCP and AWS as additional background.

Abstract

Microsoft are getting better at closing out security gaps in well-known APIs and components of their platform. However, as shown across the different cloud service providers, these interconnected systems almost always have a significant amount of complexity and a significant range of APIs that communicate together in various ways. Exploring these lesser-known APIs from an attacker and defender’s perspective allows us to better understand these complex attack surfaces and further defend cloud environments.

This talk will aim to further expand the rapidly developing field of exploring hidden APIs in Entra/Azure, focusing on the SharePoint APIs used by the service through the browser client. We’ll explore enumeration methods available through the SharePoint APIs that avoid direct usage of Microsoft Graph and thereby allow an attacker to evade all known and possible methods of detection. The techniques that will be shown allow an attacker with a foothold in SharePoint to pivot and laterally move throughout an Azure environment, circumventing modern security controls and possibly allowing for the compromise of additional services, aiding an adversary in moving toward their objectives. The talk will conclude with an exploration of file-sharing security controls in the environment and whether they can be bypassed, as well as an overview of what actions are available to defensive teams to prevent or detect attempts at using these APIs directly.

Attendees will gain an understanding of:

  • Microsoft SharePoint Online internals and differences to SharePoint related Microsoft Graph APIs
  • How an attacker with a foothold as a regular business user with access to SharePoint can bypass security controls within a tenant to access sensitive resources
  • What a security team can do to prevent and detect usage of these APIs within an organization


Taming LLMs to Detect Anomalies in Cloud Audit Logs

Theme: Mapping the frontier: supporting new clouds and technology

Speaker

Yigael Berger

Yigael Berger is a tech entrepreneur innovating in cybersecurity and AI. Yigael is a veteran of 8200, the Israeli cybersecurity and SigInt agency. He co-founded VisibleRisk, a cybersecurity risk quantification startup funded by Moody’s and acquired by BitSight in 2021. Yigael holds a BSc and MSc in Computer Science from the Technion and Tel Aviv University. He published the paper “Dictionary attacks using keyboard acoustic emanations” in the Proceedings of the 13th ACM Conference on Computer and Communications Security, and has a 2024 patent-pending invention titled “Contextual Anomaly Detection in Cloud Activity Logs”.

Abstract

Cloud audit logs generate massive volumes of data, making anomaly detection a complex and often error-prone challenge. Traditional systems frequently suffer from high false positive rates, overwhelming security teams and obscuring critical insights. In this talk, I will explore an innovative approach for training an LLM on log data, turning it into a powerful, highly nuanced anomaly detection engine.

We will be releasing these components:

  1. The code for parsing log data (e.g., CloudTrail)
  2. The code for training the LLM on the log data
  3. A lite web app for visualizing and investigating anomalies
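
To make the parsing step concrete, a toy version of flattening CloudTrail records into training "sentences" might look like the sketch below. Field selection and formatting here are illustrative and will differ from the released code:

    import gzip
    import json
    from pathlib import Path

    # Toy parser: flatten CloudTrail records into compact event "sentences" for LM training.
    def cloudtrail_to_sentences(log_dir: str) -> list[str]:
        sentences = []
        for path in Path(log_dir).glob("*.json.gz"):
            with gzip.open(path, "rt") as fh:
                for record in json.load(fh).get("Records", []):
                    sentences.append(" ".join([
                        record.get("eventSource", ""),
                        record.get("eventName", ""),
                        record.get("userIdentity", {}).get("type", ""),
                        record.get("awsRegion", ""),
                        record.get("errorCode", "Success"),
                    ]))
        return sentences

    for line in cloudtrail_to_sentences("./cloudtrail-logs")[:5]:
        print(line)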


The Duplicitous Nature of AWS Identity and Access Management (IAM)

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Jason Kao

Jason Kao is the founder of Fog Security and is passionate about cloud identity and access management and cloud data security.

His previous experience in cloud spans offensive cloud consulting at Praetorian, building out cloud security at a large financial firm, and running security research and solutions at CloudQuery. He is an author on multiple security patents. Jason has previously given talks at AWS re:Invent, AWS re:Inforce, SANS CloudSecNext, Mandiant mWise, and more.

In his spare time, he likes to swim, test out new recipes in the kitchen, and dabbles in photography.

Abstract

Configuring AWS Identity and Access Management is typically seen as the customer’s responsibility for security. This is predicated on the “shared responsibility model” where security and compliance responsibility is shared between the cloud provider (AWS) and the customer.

We believe that the “shared responsibility model” comes with certain assumptions: that the cloud provider gives clear instructions for how to use its tools and configure infrastructure, and that IAM actions and permissions are clear and unique. What would be the point if we block one IAM action only to find there’s another we missed, in a game of whack-a-mole?

In this talk, we’ll go through increasingly problematic examples of duplicitous IAM permissions: distinct permissions that effectively let us achieve the same goal. These examples include retrieving data, setting permissions (resource-based policies), and more. We’ll cover the impact and how these overlaps lead to blind spots in security, including gaps in monitoring and alerting defenses, preventative controls, and more.
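
As one hypothetical illustration of the pattern (the parameter name is made up, and this is not necessarily an example from the talk), AWS Systems Manager exposes distinct IAM actions that return the same value, so a control scoped to only one of them leaves a gap:

  # Hypothetical sketch: two different IAM actions that retrieve the same
  # secret. A deny statement or alert that covers only ssm:GetParameter
  # misses the equivalent ssm:GetParameters call.
  import boto3

  ssm = boto3.client("ssm")

  # Governed by the ssm:GetParameter action
  one = ssm.get_parameter(Name="/prod/db-password", WithDecryption=True)

  # Governed by the separate ssm:GetParameters action, same plaintext result
  many = ssm.get_parameters(Names=["/prod/db-password"], WithDecryption=True)

  assert one["Parameter"]["Value"] == many["Parameters"][0]["Value"]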


The False Sense of Security: Defense Becoming a Vulnerability

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Nathan Eades

I bring a decade of diverse experience in the IT industry. My career has included roles in software development, with the majority focused on cybersecurity encompassing threat detection, threat research, data loss prevention, endpoint security, networking, access controls, and more. For the past seven years, my primary focus has been the proactive identification of potential threats. I have honed my skills in developing sophisticated methods for detecting these threats, ensuring that defense mechanisms stay one step ahead of malicious actors. Today, that includes thoughtfully integrating AI to enhance and simplify intelligence and detection pipelines. I hold a B.S. in Computer Information Systems and an M.S. in Information Security from Robert Morris University.

Abstract

In the evolving landscape of identity security, Microsoft Entra ID’s Privileged Identity Management (PIM) stands as a cornerstone solution promising just-in-time (JIT) access and least privilege enforcement. However, beneath this security veneer lies a troubling reality that many organizations fail to recognize, or won’t admit. This session will peel back the layers of PIM and JIT implementation to reveal how this widely-adopted control has often created a false sense of security rather than meaningful protection.

Drawing from experience analyzing diverse customer environments, I’ll demonstrate how common PIM implementations can reduce security to a mere procedural formality - transforming “just-in-time” into “just-a-button” that sophisticated adversaries easily circumvent.

I’ll reveal a couple of gaps in PIM, along with improvements that convert checkbox security into actual protection.


The Good, The Bad, and The Vulnerable: Breaking Down GCP Tenant Projects

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speakers

Ofir Balassiano

Ofir Balassiano leads AI and Cloud security posture research at Palo Alto Networks, uncovering critical vulnerabilities in GCP and Azure. With over a decade of experience in security, he has a proven track record of impactful research and innovative solutions. Prior to Palo Alto Networks, Ofir served as head of security at Dig Security, driving key security initiatives, and as a senior researcher at XM Cyber, where he specialized in Windows internals and EDR strategies. His career began in the IDF, where he led a team focused on advanced security technologies. His expertise spans cloud security, OS hardening, and penetration testing, with a unique ability to analyze and secure systems from both offensive and defensive angles. His work continually influences best practices in cloud security, keeping organizations ahead of emerging threats.

Ofir Shaty

Ofir Shaty is a seasoned Senior Security Researcher at Palo Alto Networks with an eight-year track record in data security, web application security, and cloud security. With a specialized focus on researching cloud and database attacks, he has contributed groundbreaking research to the field, exploring both offensive and defensive strategies and attack techniques. Notably, he has disclosed vulnerabilities in multiple GCP services.

Abstract

Tenant Projects are the backbone of services in GCP, yet their architecture remains largely misunderstood - even by seasoned cloud security practitioners. This talk takes a deep dive into how GCP implements Tenant Projects, how permissions and interconnected services are structured, and where the cracks start to form.

As part of our research into Vertex AI, we uncovered vulnerabilities that not only compromised Vertex AI itself but also exposed fundamental weaknesses in the Tenant Project model. By understanding the permission model and service interactions, we were able to escalate our findings and take full control over an entire Tenant Project.

We’ll walk through the architecture, highlight the risks, and show real-world exploitation scenarios - unveiling for the first time additional vulnerabilities beyond our initial discoveries. This talk isn’t just about the bugs; it’s about how attackers can abuse the Tenant Project model and what security teams need to do to defend against it.

Expect a mix of deep technical content, hands-on exploitation, and a broader discussion on the implications of GCP’s multi-tenant architecture.


The Good, the Bad, and the Ugly: Hacking 3 CSPs with 1 Vulnerability

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speakers

Hillai Ben-Sasson

Hillai Ben-Sasson (@hillai) is a Security Researcher based in Israel. As part of the Wiz Research Team, Hillai specializes in research and exploitation of web applications, application security, and finding vulnerabilities in complex high-level systems. Hillai is a frequent speaker at security conferences and has been recognized on MSRC’s Most Valuable Researchers leaderboard.

Andres Riancho

Andres is an application and cloud security expert with deep expertise in offensive security and Internet-scale vulnerability research. He has worked extensively as an application security consultant, founded a security consultancy firm, and led web security initiatives as Director at Rapid7, where he contributed to advancing vulnerability scanning capabilities.

Over the years, he has authored multiple open-source tools focused on web and cloud security, which have been widely adopted by the security community and featured in international security conferences.

Currently, Andres is part of the Vulnerability Research team at Wiz, specializing in cloud security and large-scale vulnerability discovery.

Abstract

As AI workloads migrate to the cloud, Cloud Service Providers are rapidly evolving their GPU offerings. These multi-tenant environments are often built on NVIDIA Container Toolkit, the industry-standard framework for running GPU-based containerized apps. In this talk, we will show you how a single vulnerability in this fundamental framework impacted the entire CSP ecosystem - and how each environment handled a brand-new 0-day vulnerability.

We’ll walk through our discovery of a container escape vulnerability in this foundational layer of GPU infrastructure, and its real-life implications across 3 different cloud providers: Azure, DigitalOcean, and Replicate. Each case began with a standard customer workload running our exploit - but the outcomes varied widely. One led to minor impact; another enabled lateral movement that alerted the blue team; and one resulted in a complete service takeover.

Join us for a firsthand look at how major cloud providers build their environments and at the anatomy of a container escape vulnerability in the wild. Finally, learn how to build stronger guardrails in the cloud by examining the flaws and misconfigurations we were able to exploit.


This Wasn't in the Job Description: Building a production-ready AWS environment from scratch

Theme: Packing your gear: tools for operating safely

Speakers

Mohit Gupta

Mohit is a Principal Security Consultant at Reversec, where he specialises in AWS, Kubernetes, and CI/CD, amongst other things. He has been working in security for around 10 years, helping a variety of clients across most sectors in that time. He has previously spoken at conferences such as fwd:cloudsec, DEF CON Cloud Village, and SteelCon.

Nick Jones

Nick is the Global Head of Research at Reversec, where he focuses on AWS security and attack detection in advanced, cloud-native organisations. He has been delivering offensive security testing, consultancy and support to a world-wide client base (including some of the world’s largest financial organisations) for over a decade, and led WithSecure Consulting’s cloud security team for half of that time. Outside of work, Nick is on the organising committee for fwd:cloudsec Europe and also serves on the fwd:cloudsec Technical Oversight Committee and North America review board. He is also an AWS Community Builder, and has previously spoken at fwd:cloudsec, DEF CON Cloud Village, Disobey, T2, and several AWS User Groups and Community Days.

Abstract

WithSecure Consulting is going independent, and with that comes the need to create an entirely new AWS estate from scratch. The catch? We’re not an engineering house and this isn’t our core focus area. It needed to be done quickly, with the resources we already had available, on the lowest budget possible. The end result? A bunch of penetration testers and security consultants finding themselves on the other side of the coin, engineering an environment to support and enable security consulting and research work, which invariably requires bending/breaking a lot of “security best practices”.

Join Mohit and Nick as they run through the build-out process and associated engineering decisions and tradeoffs, highlighting where we chose to deviate from the usual “best practices” and why. We’ll cover:

  • Authentication & Authorisation strategies
  • Organisation structure and hardening, workload segregation tradeoffs
  • Code and infrastructure deployment approaches across an incredibly disparate set of teams
  • Security monitoring on a budget

Attendees will walk away from this talk with battle-tested advice on how to design, build, and operate an AWS estate on a limited budget with limited personnel, and with an understanding of the trade-offs that were made to support some distinctly non-standard requirements.


Trust Issues: What Do All these JSON files actually mean?

Theme: Packing your gear: tools for operating safely

Speaker

David Kerber

Dave is an engineer and longtime AWS practitioner with a focus on IAM and AWS security tooling. He’s led product and engineering teams at startups and billion-dollar companies, raised millions from VCs, built two CSPMs, and now consults on AWS security for Fortune 500 companies. He maintains open-source projects in the AWS IAM space and is currently obsessed with perfecting his focaccia.

Abstract

As cloud security practitioners, we spend our days wrangling IAM policies—but for all the JSON we manage, it’s still surprisingly hard to answer basic questions like: “Who can access this S3 bucket?” or “What can this role actually do?” Understanding AWS permissions in practice means piecing together policies across services, accounts, organizations, and trust layers. And because those policies are often managed by different teams or scattered across pipelines, it’s difficult to reason about what’s truly possible in a deployed environment.

This talk explores a pragmatic approach to verifying effective IAM permissions: simulating what AWS IAM actually allows across all policy layers, and exposing the results in a way that clearly shows who can do what, and why. Rather than replacing pre-deploy linters or policy review processes, this system complements them by analyzing deployed IAM configuration and evaluating real-world access across identities, resources, and trust relationships. Want to know which principals have s3:GetObject access to your prod bucket? Or which external accounts can assume a sensitive role? We’ll show how to answer those questions—quickly, clearly, and without hand-parsing several JSON files.
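
As a hypothetical illustration of the kind of question such a system answers (the ARNs below are made up, and this uses AWS’s built-in policy simulator rather than the project being released), you can ask whether a given principal is allowed s3:GetObject on a bucket; note that the simulator only evaluates the policy layers you feed it, which is part of why analyzing deployed configuration across all layers matters:

  # Hypothetical sketch using AWS's IAM policy simulator (not the tool
  # released in this talk): can this role read objects in the prod bucket?
  import boto3

  iam = boto3.client("iam")

  resp = iam.simulate_principal_policy(
      PolicySourceArn="arn:aws:iam::123456789012:role/app-role",  # made-up ARN
      ActionNames=["s3:GetObject"],
      ResourceArns=["arn:aws:s3:::prod-bucket/*"],  # made-up bucket
  )

  for result in resp["EvaluationResults"]:
      # EvalDecision is "allowed", "explicitDeny", or "implicitDeny"
      print(result["EvalActionName"], result["EvalDecision"])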

You’ll leave with a new set of tools for understanding how IAM really works in your environment. This session includes a demo and the release of an open-source project built to support these workflows.


What Do You Mean, 'Resource Not Found?' Demystifying GCP Error Codes for IR & Detections

Theme: Packing your gear: tools for operating safely

Speaker

Gabriel / Gavriel Fried

Gavriel Fried is a Principal Security Researcher at Mitiga. Prior to Mitiga, Gavriel held various research positions in the cybersecurity field spanning UEBA, Deception, Network and DPI, Red Teaming, Digital Forensics, and some Malware Analysis. Gavriel researches potential attacks and abuses of cloud services and SaaS.

Abstract

Have you ever stared at a GCP audit log error—like “resource not found” or “permission denied”—and wondered what really went wrong or if there’s more to the story? In this session, we’ll unravel the often-overlooked world of GCP error codes and reveal how these cryptic messages can guide your incident response, sharpen detection rules, and even hint at possible reconnaissance attempts. Through practical examples, we’ll show how digging deeper into error objects can highlight missing identity details, reduce false positives, and strengthen your overall GCP security posture. If you’ve ever dismissed an “unhelpful” error, join us to learn why those logs might be more powerful than you think.
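
As a hypothetical sketch of the kind of digging the session describes (the project ID and field choices are illustrative), the error codes in GCP audit logs are standard google.rpc status codes, so permission-denied activity can be pulled out with a Cloud Logging filter and inspected programmatically:

  # Hypothetical sketch: list recent Admin Activity audit-log entries whose
  # status is PERMISSION_DENIED (code 7; NOT_FOUND is code 5) and print the
  # caller, method, and error message worth mining for IR and detections.
  from google.cloud import logging as cloud_logging

  client = cloud_logging.Client(project="my-project")  # made-up project ID

  log_filter = (
      'logName:"cloudaudit.googleapis.com%2Factivity" '
      "AND protoPayload.status.code=7"
  )

  for entry in client.list_entries(filter_=log_filter, max_results=20):
      payload = entry.payload or {}  # the audit protoPayload as a dict
      print(
          payload.get("authenticationInfo", {}).get("principalEmail"),
          payload.get("methodName"),
          payload.get("status", {}).get("message"),
      )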


What would you ask a crystal ball for AWS IAM?

Theme: Packing your gear: tools for operating safely

Speaker

Nick Siow

Nick is a security professional who loves an engineering-first approach to all things cloudsec. He has experience ranging from wrangling the world’s largest cloud deployments to securing a single EC2 instance at nonprofits, and most things in between. He is currently a security software engineer at Netflix.

Abstract

In AWS, Identity and Access Management (IAM) policies are the foundation of access control throughout the cloud. The complexity and expressiveness of these policies present significant challenges to cloud security professionals when it comes to modeling access and answering basic questions such as “who can access this resource?” or “what are the effects of this policy change?”

This presentation will walk practitioners through a three-part journey:

  • Introducing new OSS building blocks which can remove the guesswork of writing IAM policies
  • Using these building blocks to uplevel several cloud security pillars
  • Frameworks to simplify and distill the nuance of cloud access into insights for builders and leaders at their own companies

This talk will include the release of the above open-source tooling to support and facilitate the approaches it presents.


When Your Partner Betrays You - Trusted Relationship Compromise In The Cloud

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Sebastian Walla

Sebastian Walla is an expert in cloud threat intelligence. He is the deputy manager of the Emerging Threats team (focusing on cloud) and built the Cloud Threat Intelligence mission at CrowdStrike. Sebastian has worked as a reverse engineer for five years and has focused on cloud intrusions for the past three. He studied cybersecurity, holds a Master’s in Computer Science, and published a paper on automatically identifying and exploiting tarpit vulnerabilities to fight malware. He further holds the GREM and GCLD certifications and has presented at Euro S&P 2019, Fal.Con 2023, fwd:cloudsec EU 2024, and BSides Bern 2024.

Abstract

Nation-state adversaries, and occasionally eCrime actors, have repeatedly leveraged trusted relationship or supply chain compromises in endpoint environments to gain access to a large number of victims by compromising a single target and then moving laterally to downstream customers. While this initial access vector is well known in the traditional threat landscape, there is little open-source reporting beyond COZY BEAR abusing trusted relationship compromises to access Entra ID environments. In this talk we will look at two incident response cases in which threat actors compromised a Microsoft Cloud Solution Provider and a SaaS provider, then used those providers’ access to move laterally to downstream customers and reach emails in O365. We will discuss how to hunt for the observed techniques, cover mitigations, and examine the shortcomings in defending against these kinds of attacks.


You Are Not Netflix: How to learn from conference talks

Theme: Forming a fellowship: organizations and community

Speaker

Rami McCarthy

Rami is an opinionated security wonk. He has helped build and scale security programs at companies like Figma and Cedar. Now, he strives to work on Security, for the Internet, at Wiz. His personal thoughts about security are over at ramimac.me.

Abstract

Conference talks and engineering blogs are often quilted from small omissions and half-truths. They tell subtle white lies about collaboration, minimize technical challenges, inflate outcomes, and omit critical details regarding risks, technical debt, and unresolved issues. This is part of the unspoken social contract of sharing sensitive internal information publicly.

The key is to read between the lines, spot the implicit, and still extract meaningful insights. This talk will provide you with a framework to navigate these nuances effectively.

We’ll explore what is often left unsaid, examine real-world examples, and equip you with the tools to make the most of fwd:cloudsec and similar events!


whoAMI: Discovering and exploiting a large-scale AMI name confusion attack

Theme: Surveying the wilderness: attacks and vulnerabilities, defensive practices

Speaker

Seth Art

Seth Art is currently a Security Researcher & Advocate at Datadog. Prior to joining Datadog, Seth created and led the Cloud Penetration Testing practice at Bishop Fox. He is the author of many open source tools including BadPods, IAMVulnerable, and CloudFoxable, and the co-creator of the popular cloud penetration testing tool, CloudFox.

Abstract

It’s not every day you stumble upon a technique that enables remote code execution (RCE) in thousands of AWS accounts at once—but that’s exactly what happened with the whoAMI attack. By researching a known misconfiguration through a new lens, we discovered how to gain access to thousands of AWS accounts that unknowingly use an insecure pattern when retrieving AMI IDs.

In this talk, I’ll walk you through how we uncovered the whoAMI attack, how we confirmed it works, and how we even identified vulnerable systems internal to AWS. We’ll explore the surprisingly diverse ways developers manage to shoot themselves in the foot by omitting the owners attribute, and share how difficult it was to build and refine detections for this anti-pattern while minimizing false positives (and false negatives).
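
To make the anti-pattern concrete, here is a hypothetical before-and-after sketch (the AMI name filter is illustrative): without an Owners constraint, an AMI lookup by name can select a lookalike public image published by an attacker, while pinning the publisher’s account closes the gap.

  # Hypothetical sketch of the misconfiguration: an AMI lookup by name only.
  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")
  name_filter = [{
      "Name": "name",
      "Values": ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"],
  }]

  # Vulnerable: no Owners constraint, so an attacker's recently published
  # public AMI with a matching name can end up selected as the "latest" below.
  risky = ec2.describe_images(Filters=name_filter)

  # Safer: pin the expected publisher so lookalike AMIs are excluded.
  safe = ec2.describe_images(
      Owners=["099720109477"],  # Canonical's account ID; verify for your publisher
      Filters=name_filter,
  )

  latest = max(safe["Images"], key=lambda img: img["CreationDate"])
  print(latest["ImageId"])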

Finally, we’ll focus on how you can spot and fix this misconfiguration in your own environment, covering a range of defense-in-depth strategies for both prevention and detection. This is a roller-coaster tale of cloud security research—full of ups and downs and twists and turns. And like every roller coaster I’ve ever been on, it lasted longer than I expected - or wanted it to.