Where Code Shapes Power
Expect sharper edges. Heavier conversations. And people who don't confuse authority with understanding.
KEY THEMES
Nation-state tradecraft
AI and intelligence
Offensive and defensive asymmetry
Real-world constraints
Systems that matter at scale
Agenda
Registration and Exhibit Setup + No-Host Happy Hour
Doors open. Pick up your badge, set up your exhibit space, and grab a drink. Let the networking begin.
Registration and Check In
Doors open. Grab your badge and coffee.
Underground Exhibits
Open all day. Think less trade show, more operator playground. The RBLN Underground brings together cutting-edge companies, interactive experiences, and the technology driving the next wave of disruption.
The Gauntlet
The Rebellion Gauntlet is a 24-hour, no-pause, no-excuses endurance competition built for AI engineers, hackers, operators, and builders who want to win. You'll be given a real-world security challenge and 24 hours to ship a working solution.
Opening Remarks
Welcome to RBLN — why we built it, who it’s for, and what makes this experience different.
Expert Witness War Stories: From the Witness Chair in High-Stakes Technology Battles
What does it feel like when your technical conclusions are challenged under oath, with billions of dollars and corporate reputations at stake? In this talk, Dr. Avi Rubin shares candid stories from more than two decades serving as a technology expert witness in complex litigation involving cybersecurity, software systems, AI, and emerging technologies. Through anonymized case studies and firsthand experiences, he reveals the intensity of cross-examination, the strategic maneuvering behind the scenes, and the pivotal moments that can shift the outcome of a case. For select matters, he takes a deep dive into the specific technologies at issue and unpacks the core technical and legal questions that shaped the dispute, while also sharing some of the unexpected, lighthearted moments that arise even in the most serious proceedings. This session offers a rare look at credibility, persuasion, and the human dynamics that often determine which arguments prevail in high-stakes technology disputes.
The CISO Reality Check: AI Risk Without the Filter
“Expect sharper edges” means we’re done pretending AI risk is neatly contained.
• A clearer view of where AI is introducing unseen enterprise risk—and how leaders are surfacing it before it becomes material
• How different industries are making tradeoffs between speed, innovation, and control—and what those decisions actually cost
• Practical approaches to governance that go beyond policy decks and stand up in production environments
• Lessons from the field on where security programs are quietly failing—and how they’re being rebuilt in real time
If you’re looking for polished answers, this isn’t it. If you want to understand how CISOs are actually navigating AI risk when the stakes are real, this is where the conversation starts.
Gall’s Law and the OODA Loop: First Principles for Building in the Age of AI
Attendees will leave with a technical and operational framework for designing scalable AI-native enterprise systems capable of supporting coordinated intelligence at organizational scale without sacrificing governance, resiliency, security, or architectural coherence.
Lunch and Networking
Lightning Talks
Quick, impactful presentations perfect for introducing new ideas, tools, or techniques.
The Exploitation Lifecycle: Exploring the Stages of Exploitation
This presentation explores attacker activity "left of boom" - from vulnerability discovery, to disclosure, to weaponization, to delivery, and finally to exploitation. At each stage of the exploitation lifecycle, we take a detailed look at attacker activity and counterpose it with the intelligence and operational activities that help defenders better protect their environments. The presentation concludes with a fresh way of thinking about the steps leading to successful exploitation, including how intelligence derived from attacker activity at each stage of the lifecycle can be leveraged to paint a more complete picture of adversary capabilities in support of the strongest possible defense.
AI‑First Enterprise Architecture: Turning Pilots into Production
• How to recognize why your AI pilots are not scaling and which architectural gaps are to blame.
• A concrete reference model for an AI platform that can serve many use cases across the enterprise.
• Patterns for integrating AI services with your existing microservices, data lake/lakehouse, and identity/security stack.
• Practical SLOs and observability signals for monitoring AI systems in production (quality, drift, and cost).
• A staged roadmap to evolve from ad‑hoc AI projects to a governed, reusable AI platform.
How AI is Changing Offensive Security and Continuous Attack Simulation
- How AI is lowering the barrier to advanced offensive capabilities
- Why continuous attack simulation is replacing traditional pentesting models
- How modern attackers operate in an AI-augmented landscape
- How platforms like Aikido Security enable developers and security teams to continuously identify and fix vulnerabilities before attackers exploit them
This is not theory. This is how offensive security is evolving in practice, and what you must do to stay ahead.
Agent Attacks Beyond the Policy and Identity Layer
• The hidden risks in AI agent architectures
• Common exploits and attack patterns
• Real-world mitigation strategies
• AI governance frameworks and kill chains
• Practical tools to get started
• Separating hype from reality
Getting the Most Out of Security Testing: A Hacker's Perspective
Offensive security assessments are where defenders get a front-row seat to how real operators hunt, pivot, chain weaknesses, and break assumptions. This isn’t about checking boxes or dumping CVEs into a report; it’s about seeing how attackers actually move through an environment once they get a foothold. Which detections fail. Which trust relationships collapse. Which “low-risk” issue quietly turns into domain admin at 2 a.m. And now AI is pouring fuel on both sides of the fight. Recon is faster. Phishing is sharper. Social engineering scales. Exploit development and research cycles are compressing. Attackers are iterating faster than most organizations can triage. Defenders don't just need more visibility; they need to understand attacker tradecraft well enough to anticipate how modern intrusions unfold before the alarms start firing. In this session, I’ll share lessons learned from over a decade of penetration tests and adversary simulations, breaking down the asymmetry between offense and defense through the lens of real attack paths, privilege escalation chains, operational blind spots, AI-enabled abuse cases, and the kinds of mistakes attackers love finding in mature environments. Whether you’re running a SOC, leading a security program, or trying to get more than another PDF out of your next pen test, this talk is about understanding how attackers actually think and how defenders can weaponize that perspective to become harder targets.
The Skill Wave: The AI threat we aren't preparing for
- Current AI adoption by both criminals and defensive vendors is accelerating the existing speed/scale threat, not signaling where the threat is heading. The genuinely new risk is the skill wave: cheap stealth, patience, and adaptive tradecraft available to financially motivated actors for the first time.
- "Penetration depth" determines who gets hit by which wave; shallow organizations get savaged by speed and scale, while deep organizations are protected only as long as traversing depth requires expensive human skill, a condition AI is rapidly eliminating.
- AI-augmented attackers won't need domain admin. DA is a shortcut that lets shallow attackers achieve impact despite limited depth, but attackers operating at depth won't route through the security bottlenecks we've built around high-criticality accounts. Metrics like "domain admin in n seconds" assume a shallow attack model where DA equals impact, and as AI makes depth cheap to traverse, benchmarks built around positional speed become dangerously misleading.
- Speed, scale, and skill are reinforcing dimensions (scale creates emergent skill through cross-target intelligence, skill enables sustainable scale by preventing eviction), and their interactions will generate novel payout models that don't exist in our current threat taxonomy, just as ransomware emerged unpredictably when its enabling conditions converged.
IoT HackBots: AI-Powered Hardware Hacking Tool Development
Large language models (LLMs) have already changed the game in offensive security. Tool calling systems like Claude Code allow security researchers to discover zero days and weaponize vulnerabilities faster than ever. However, one class of systems has been out of reach: IoT devices. This session will explore the development of Claude Skills that interact with traditional hardware hacking tools. This tool access provides device context to LLMs that is critical to vulnerability discovery. These Skills allow LLMs to access UART consoles, probe unknown signals with logic analyzers, and obtain root shells over a live network. We will also discuss the potential risk-driven imbalance between offensive and defensive AI adoption.
Same Side, Different Speeds: Rethinking Vulnerability Disclosure in the Age of AI
Vulnerability discovery and development have progressed remarkably in recent years, aided and abetted by broad adoption of AI. The volume of new vulnerabilities and exploits flooding the technology ecosystem continues to grow, but many of the human-led systems designed to ingest, validate, and standardize those vulnerabilities have stagnated both technically and philosophically. This asymmetry contributes to antagonistic relationships between good-faith security researchers and technology suppliers, exacerbated by misaligned incentives that encourage quantity over quality and rapid discovery over deeper remediative action. This talk will explore steps that software suppliers and vulnerability researchers can take to improve bilateral disclosure experiences and deliver better outcomes in a rapidly changing security world — starting with an acknowledgment that they’re on the same side.
Rise of the Pond Master - Jailbreaking tales with big ducks energy
In the ever-evolving arms race between AI developers and the underground hacking community, a new breed of jailbreak artist has emerged — one fueled by absurdity, persistence, and unapologetic "big ducks energy." This talk chronicles the rise of Pond Master, an unorthodox yet highly effective jailbreaking methodology that turns LLM guardrails into playgrounds through creative prompting, psychological manipulation, and relentless experimentation.
Ghosts in the Machine - The Therac-25 Affair
In 1985, a software race condition in a radiation therapy device called the Therac-25 began quietly killing cancer patients by delivering radiation doses up to 100 times the therapeutic level. Six patients were overdosed, and three died. The root cause was nothing exotic: reused code, removed hardware interlocks, a single unreviewed programmer, and a manufacturer so confident in their software that they dismissed every patient complaint for nineteen months. Almost forty years later, the healthcare sector is deploying millions of connected medical devices such as insulin pumps, infusion systems, patient monitors (telemetry), diagnostic imaging, connected laboratory equipment, and implantables. A surprising number of these devices repeat every structural failure that the Therac-25 made famous. Software-only safety controls. Legacy firmware reused without re-testing. Security alert fatigue. This talk takes attendees inside the Therac-25 Affair with deep technical details of the race conditions, the integer overflows, the missing hardware interlocks, and the regulatory blind spots.
Pod-tential for Disaster: Hacking Kubernetes from Pod to Cluster
Throughout the session, you’ll see exactly how bad actors chain common Kubernetes configuration flaws to move laterally, escalate privileges, and ultimately breach critical components. We’ll wrap up by discussing straightforward fixes and best practices that can thwart such attacks in your own deployments. Whether you’re a security pro or just getting started with container orchestration, you’ll come away with a clear understanding of how Kubernetes implementations get hacked—and how to keep them secure. If you want to grasp container security by breaking it first, this talk is for you.
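As a taste of the kind of misconfiguration this session dissects, consider a hedged, hypothetical pod spec (the names are illustrative, not from any real incident) that stacks several of the classic pod-to-node escalation footholds:

```yaml
# Hypothetical pod spec illustrating common escalation footholds
apiVersion: v1
kind: Pod
metadata:
  name: debug-helper          # innocuous-looking name
spec:
  hostPID: true               # can see and signal host processes
  hostNetwork: true           # shares the node's network namespace
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true        # full device access; trivial node escape
    volumeMounts:
    - name: host-root
      mountPath: /host        # host filesystem mounted into the pod
  volumes:
  - name: host-root
    hostPath:
      path: /
```

Any one of `privileged`, `hostPID`, or a root `hostPath` mount can be enough; together they effectively hand over the node, which is why admission policies typically reject all three.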
Agents Don't Collaborate Like Humans. Stop Building Like They Do.
- Agents aren't users of your software — they're operators. Build infrastructure accordingly.
- Every time you move context from a product surface to an open interface, your agent system gets more capable. This compounds.
- The line between good and bad agent infrastructure isn't local vs cloud — it's whether an agent can operate on the interface directly without product-imposed constraints.
- The MCP ecosystem is solving connector logistics. The harder problem is environment design — giving agents space to think, not just endpoints to call.
- A practical litmus test for your stack: if an agent can't query it, extend it, or rewrite it, it's working against you.
From Librarian to SRE: Architecting Multi-Agent Systems for Autonomous Cloud Remediation
- The Shift: Learn to transition your AI strategy from a "Knowledge Librarian" (RAG) to a "Digital SRE" (Agentic).
- The Blueprint: A modular architecture for a Multi-Agent System specialized for cloud infrastructure.
- Safety Protocols: Practical methods for implementing "Guardrails" and "Supervisor" agents to ensure autonomous actions remain safe.
- Optimization: Techniques for managing massive log data and maintaining state across complex, multi-step agent operations.
Reverse Engineering EDR Kernel Drivers with AI
Production EDR kernel drivers represent the ultimate endpoint security boundary, yet their complexity—often exceeding 5MB of opaque code—renders systematic manual reverse engineering impractical. This talk introduces an agentic AI workflow that fundamentally shifts the economics of kernel analysis by leveraging Cursor IDE and the Model Context Protocol (MCP) to automate the mechanical burden of IDA Pro-based reverse engineering. We will demonstrate a repeatable, phase-based methodology—covering PE triage, IOCTL surface enumeration, and automated documentation—that has successfully mapped thousands of functions across multiple production EDR stacks. Attendees will gain a transferable framework for rapidly identifying BYOVD-relevant attack surfaces and validating security trust boundaries, moving beyond vendor marketing claims to empirical, AI-assisted analysis of the security tools they rely on daily.
The CoreBreak Attack: Turning AI Agents into Credential Exfiltration Vectors
1. A security-focused deep dive into managed AI agent infrastructure: how AWS Bedrock AgentCore works under the hood, the security assumptions baked into it, and exactly where its trust boundaries collapse.
2. Full code and a live walkthrough of discovering credential exfiltration vulnerabilities in both the Browser and Code Interpreter tools, then chaining them into CoreBreak, a realistic, fully automated attack that goes from a single webpage visit to complete credential compromise without detection.
3. A concrete defensive playbook for the agent era: why traditional I/O guardrails fail when your tools are the attack surface, and how to architect around it using zero-trust boundaries, layered isolation, and least-privilege enforcement, with actionable steps you can apply today.
4. Reconnaissance methodology showing how to identify exposed AgentCore deployments in the wild, demonstrating that this isn't theoretical: organizations are already running vulnerable agent infrastructure in production.
5. A new security mental model: in the agent world, code execution and browser access aren't vulnerabilities to patch - they're features. The entire security paradigm needs to shift from "prevent RCE" to "assume RCE and contain the blast radius."
No Encryption Required: Why Modern Ransomware Bypasses Everything and What DFIR Finds When It Does
1. Identify the four MFA bypass and initial access techniques ransomware affiliates actively use (AiTM proxy phishing, session token replay, push fatigue, and social-engineered RMM tool installs) and determine which one was used from post-incident forensic artifacts
2. Recognize BYOVD as a pre-attack setup step, not a novel technique, and detect it through driver load auditing and EDR telemetry gap analysis rather than relying on blocklists
3. Scope exfiltration forensically when encryption-less extortion through legitimate cloud services defeats both DLP and backup strategies and no ransomware binary exists
4. Deploy honeycreds, canary files, and canary API keys as detection controls that generate zero false positives and function independently of endpoint agents, MFA, and network monitoring
5. Map each "comfort blanket" control to the specific deception-based detection that covers its known bypass, with a concrete deployment plan executable within one week
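The honeycred idea above is simple enough to sketch. The following is a minimal, hypothetical illustration (the key format, function names, and log shape are assumptions, not the speaker's tooling): plant credentials that no legitimate workflow ever uses, so any appearance of one in an auth log is an attacker by construction.

```python
import secrets

# Hypothetical sketch of canary credentials: keys planted in config files
# and vaults that no legitimate process uses. Any use in an auth log is a
# true positive by construction, independent of EDR, MFA, or network
# monitoring. The "AKIA-CANARY-" format is illustrative only.

def mint_canary_keys(n: int) -> set[str]:
    """Generate distinctive throwaway keys to plant as bait."""
    return {f"AKIA-CANARY-{secrets.token_hex(8)}" for _ in range(n)}

def scan_auth_log(lines: list[str], canaries: set[str]) -> list[str]:
    """Return log lines in which a planted canary credential appears."""
    return [line for line in lines if any(c in line for c in canaries)]
```

A real deployment would plant the keys where loot-hunting attackers look (env files, CI secrets, browser password stores) and alert on any match; the zero-false-positive property comes from the planting, not the scanner.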
Breaking the Stream: Real-Time AI Model Exploitation and Defense Strategies
1. Practical exploitation skills: Hands-on understanding of 5+ AI attack techniques including real-time streaming exploits, with code examples and tools they can use to test their own systems
2. Actionable defense playbook: A comprehensive security framework with specific controls for streaming AI, including token-level validation, real-time monitoring configurations, and circuit breaker implementations
3. Real-world threat intelligence: Knowledge of active attack campaigns targeting streaming AI systems, TTPs used by threat actors, and indicators of compromise for streaming-specific attacks
4. Security testing toolkit: Access to open-source tools, scripts, and methodologies for penetration testing streaming AI systems, including WebSocket/SSE security testing frameworks
5. Streaming AI security architecture: A structured approach to secure real-time inference deployments, including edge protection, rate limiting strategies, and monitoring for streaming endpoints
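The "circuit breaker" control mentioned above can be sketched in a few lines. This is a hedged illustration under assumed names and thresholds, not a vendor API: tokens streamed to a client are validated one at a time, and once too many policy violations land inside a sliding window, the breaker opens and the stream is cut.

```python
from collections import deque
import time

class StreamCircuitBreaker:
    """Illustrative circuit breaker for a streaming inference endpoint.

    Thresholds and the policy check are stand-ins; real deployments would
    plug in denylists or a lightweight classifier per token.
    """

    def __init__(self, max_violations: int = 3, window_s: float = 10.0):
        self.max_violations = max_violations
        self.window_s = window_s
        self.violations: deque = deque()  # timestamps of recent violations
        self.open = False                 # open breaker == stream terminated

    def record_token(self, token: str, now: float = None) -> bool:
        """Validate one streamed token; False means stop the stream."""
        if self.open:
            return False
        now = time.monotonic() if now is None else now
        if self._violates_policy(token):
            self.violations.append(now)
        # drop violations that fell out of the sliding window
        while self.violations and now - self.violations[0] > self.window_s:
            self.violations.popleft()
        if len(self.violations) >= self.max_violations:
            self.open = True
            return False
        return True

    @staticmethod
    def _violates_policy(token: str) -> bool:
        # stand-in for real token-level validation
        return "<script" in token.lower()
```

The same shape works for SSE or WebSocket relays: the proxy calls `record_token` per chunk and closes the connection when it returns False.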
Teaching a New Dog Old Tricks: Hacking FIDO Passkeys
- The audience will know the intended purpose of passkeys and a contextual history of the FIDO2 protocol
- The audience will see examples of vulnerabilities and patterns of weakness, and be wary of magical claims about passkey security
- When deploying passkeys in the enterprise, the audience will have a working threat model and know how to think about configuration and vendor selection
The Underground Happy Hour
Drinks, demos, and disruption. The Underground Happy Hour brings together attendees, exhibitors, and speakers in the most energetic part of the event floor.
Underground Exhibits
Open all day. Think less trade show, more operator playground. The RBLN Underground brings together cutting-edge companies, interactive experiences, and the technology driving the next wave of disruption.
Registration and Check In
Doors open. Grab your badge and coffee.
Mission Modernization: Cloud, Security, and Scale in Federal IT Panel
CIOs, CISOs, and CTOs experienced in working with and for the federal mission have a candid conversation about what it actually takes to modernize at scale inside government. This expert panel shares insights and perspectives gained at the leading edge of 'the work': cloud adoption, enterprise architecture shifts, and security transformation in highly regulated, mission-critical environments. Hear how experts balance innovation with risk, manage legacy system constraints, navigate procurement and policy hurdles, and build resilient, zero-trust-aligned infrastructures.
6 Hard Lessons from Zero Trust Deployments: What the Field Is Actually Seeing
Zero Trust has quickly moved from security concept to board-level mandate. Yet the reality inside most organizations is far messier than the architecture diagrams suggest. Based on several hundred interviews with IT and security practitioners as well as the analyst community, this session explores what actually happens when organizations attempt to implement Zero Trust. The findings are sobering: roughly 60–70% of Zero Trust initiatives stall before reaching maturity, and fewer than 20% achieve a fully realized Zero Trust architecture. Rather than focusing on theory or vendor frameworks, this session examines six hard lessons from the field, highlighting both the successes and the failures organizations encounter along the way. Topics include why many Zero Trust initiatives stall, where organizations underestimate complexity, the architectural decisions that matter most, and what successful deployments do differently. This presentation is not a product pitch. Instead, it’s a candid discussion of the hard realities of Zero Trust deployment, grounded in the experiences of practitioners across hundreds of organizations. Attendees will leave with practical insights into what works, what fails, and how to move a Zero Trust initiative from concept to operational reality.
Diving into how frontier AI models are coming up with vulnerabilities across the global supply chain, and how defenders can utilize AI to keep up. In this session, you'll learn:
- To make sense of the hype and marketing around every new frontier model announcement
- To identify strengths and weaknesses of models based on the domains they are put against (offensive vs defensive)
- To clearly point out the chokepoints in your defense strategy (hint - it's not detection, nor the ability to orchestrate and prioritize tickets/tasks)
- To put AI models into use for remediation tasks to fully close a vulnerability in code, in production.
From Librarian to SRE: Architecting Multi-Agent Systems for Autonomous Cloud Remediation
- The Shift: Learn to transition your AI strategy from a "Knowledge Librarian" (RAG) to a "Digital SRE" (Agentic).
- The Blueprint: A modular architecture for a Multi-Agent System specialized for cloud infrastructure.
- Safety Protocols: Practical methods for implementing "Guardrails" and "Supervisor" agents to ensure autonomous actions remain safe.
- Optimization: Techniques for managing massive log data and maintaining state across complex, multi-step agent operations.
Generative AI, Cybersecurity, and Ethics
As generative AI rapidly reshapes the cybersecurity landscape, it is transforming both attack capabilities and defensive strategies. In this session, Dr. Ray Islam explores how AI is accelerating cyber offense through automated phishing, polymorphic malware, adversarial prompt engineering, and deepfake-enabled social engineering, while simultaneously empowering defenders with intelligent threat detection, autonomous SOC workflows, and predictive risk modeling. Beyond the technical arms race, this talk addresses the ethical fault lines emerging in AI/ML-driven security: algorithmic bias in detection systems, privacy implications of large-scale surveillance models, AI governance gaps, and the responsible deployment of autonomous cyber tools.
The Cybersecurity Industrial Complex (CIC)
The root causes of successful cyberattacks that occurred over 25 years ago are still around today. The cybersecurity industry hasn't done a very good job in correcting these flaws. What have we been doing this past quarter century? There appears to be no incentive for the CIC to fix these flaws.
The Future SOC: Human + Agent Collaboration at Scale
Security Operations Centers are under increasing pressure as alert volumes grow and environments become more complex. While AI copilots have improved analyst productivity, they have not fundamentally changed how the SOC operates. A new model is emerging: the Agentic SOC. In this model, AI agents act as active participants in security operations, handling triage, enrichment, correlation, and reporting, while humans focus on judgment and oversight. This creates a hybrid workforce that can operate with greater speed, scale, and consistency. This session breaks down a practical model for building the next-generation SOC based on three pillars: context, coordination, and control. Attendees will learn how to move beyond isolated automation toward coordinated, governed AI systems, and what it takes to operationalize AI agents safely and effectively at enterprise scale.
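The "humans focus on judgment, agents handle triage" split described above can be made concrete with a small sketch. Everything here is a hypothetical illustration of the control pillar, not the presenter's architecture: agents auto-disposition only below a risk threshold, and anything above it is parked for a human decision.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    risk: float  # 0.0-1.0, assumed output of agent enrichment/correlation

@dataclass
class AgenticSOC:
    """Illustrative hybrid triage loop: bounded agent autonomy + human gate."""
    auto_threshold: float = 0.4                      # the "control" pillar
    closed: list = field(default_factory=list)       # agent dispositions
    pending_human: list = field(default_factory=list)  # awaiting judgment

    def triage(self, alert: Alert) -> str:
        if alert.risk < self.auto_threshold:
            self.closed.append(alert.id)   # agent closes, with an audit trail
            return "auto-closed"
        self.pending_human.append(alert.id)  # escalate for human oversight
        return "escalated"
```

The interesting engineering lives in what this sketch elides: where the risk score comes from (context), how multiple agents hand work off (coordination), and how the threshold is governed (control).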
Lunch and Networking
IoT Village
Join IoT Village for an interactive hackalong session exploring application security concepts, offensive techniques, and practical approaches to securing modern connected systems.
Breaking the Agent: Securing Endpoint AI Agents with OpenClaw in Production Environments
1. How to build a practical threat model for endpoint AI agents like OpenClaw, including concrete attack vectors and failure modes. 2. How common OpenClaw misconfigurations can be exploited, and how to prevent them using least-privilege and runtime controls. 3. A reference security architecture for production deployments, including policy enforcement, monitoring, and explainability. 4. How to operationalize continuous governance for agentic AI, moving beyond static assessments to runtime assurance.
1. Differentiate between eBPF hook types — tracepoints, kprobes, uprobes, and LSM hooks — and select the right one for a given security monitoring or enforcement use case
2. Build eBPF security programs in Rust using the Aya framework without writing C or depending on BCC
3. Implement LSM BPF hooks (bprm_check_security, socket_connect, security_task_kill) to block threats at the kernel level before syscalls complete
4. Navigate eBPF verifier constraints in practice — stack limits, bounded loops, per-CPU arrays, and kernel struct offset portability across kernel versions
5. Detect fileless malware by tracing memfd_create syscalls and capture TLS plaintext via OpenSSL uprobes without a MITM proxy
Vulnerability research is an extremely labor-intensive discipline in cybersecurity. Modern software poses a significant challenge, with codebases encompassing millions of lines and complex, fast-evolving attack surfaces that outpace manual analysis. Consequently, traditional vulnerability research faces a difficult choice: either conduct deep analysis over a narrow scope or achieve shallow coverage across a broad attack surface. Large Language Models (LLMs) show great promise due to their remarkable capabilities in code comprehension, pattern recognition, and technical reasoning. However, a naive application of LLMs to security research often yields unreliable results. Models may hallucinate vulnerabilities, overlook essential context, or fail to rigorously validate their findings. The central issue is not whether AI can aid vulnerability research, but rather how to structure that assistance to genuinely enhance human expertise without replacing human judgment. In this talk we will share the journey that started with creating a reliable autonomous software development infrastructure and how we applied what we learned in order to create a nearly fully-automated vulnerability research and exploit development platform that also creates actionable detections for our product.
SIR-Bench: Evaluating Investigation Depth in Security Incident Response Agents
• Investigation vs. Classification: Learn the critical difference between an AI that correctly triages alerts (97.1%) and one that conducts genuine forensic investigation (41.9% novel finding coverage)—and why both metrics matter for production deployment
• Adversarial Evaluation Design: Implement an LLM-as-Judge that inverts the burden of proof, preventing the confirmation bias that accepts alert repetition as valid investigation
• Realistic Benchmark Generation: Use the OUAT methodology to create measurable ground truth from real incident patterns without exposing sensitive production data
• Performance by Attack Category: Understand why Unauthorized Access investigations yield deep findings (47.9% hit 7+ novel discoveries) while Malicious File Execution struggles (1.9%)—and what this means for agent deployment decisions
• Production Readiness Framework: Apply the M1/M2/M3 metric framework to evaluate whether your AI security tools are performing genuine investigation or sophisticated pattern matching
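The distinction the benchmark draws can be captured in one function. This is an illustrative sketch of the "novel finding coverage" idea under assumed names, not the actual SIR-Bench implementation: triage accuracy rewards restating the alert, while investigation depth counts only findings that go beyond it.

```python
# Illustrative sketch: an agent gets no investigation credit for findings
# already stated in the alert, which is what prevents alert repetition
# from scoring as forensic work. Names and fields are assumptions.

def novel_finding_coverage(agent_findings: set,
                           ground_truth: set,
                           alert_facts: set) -> float:
    """Fraction of ground-truth findings surfaced beyond the alert itself."""
    novel_truth = ground_truth - alert_facts
    if not novel_truth:
        return 1.0  # nothing to discover beyond the alert
    return len(agent_findings & novel_truth) / len(novel_truth)
```

An agent that merely echoes the alert scores 0.0 on this metric even while scoring perfectly on triage classification, which is exactly the gap between the 97.1% and 41.9% figures above.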
Speak Security With a Business Accent: Influence, Trust, and Why Cyber Keeps Losing the Room
• How to reframe security conversations so people actually listen
• Techniques for gaining buy-in without fear, authority abuse, or manipulation
• Practical methods for translating cyber risk into business value
• How to build trust with peers, leadership, and non-technical stakeholders
• Why improving communication improves security outcomes
• A stepwise investigation workflow for suspected DPRK‐linked workers using endpoint, network, and OSINT evidence
• A prioritized artifact list and how to validate each signal to avoid false positives
• Infrastructure and behavioral patterns repeatedly observed across cases and how to test for them
• Pre‐hire and post‐hire detection design, including telemetry requirements and escalation criteria
• Case‐based lessons learned and failure modes to avoid in real investigations
• Whether they have drunk the Kool-Aid, i.e., fallen victim to cybersecurity misconceptions and lies.
• Why technology is the real problem.
• Why zero trust is the only way.
For years, cybersecurity has been framed around the risk of loss. But business leaders don’t make decisions this way and are focused on growth outcomes. So what happens when we start talking about cybersecurity as a business enabler? This talk explores how to shift the conversation from cost and complexity to revenue, productivity, and competitive advantage. The audience will learn how security investments can unlock new business value, strengthen executive alignment, and turn security teams from perceived blockers into growth partners through real world examples.
The Gauntlet Presentations and Voting
The Gauntlet Winner Announced and Closing Remarks
AfterFuse Party
Where elite minds, cutting-edge ideas, and next-level experiences collide in an exclusive, invite-only after hours event. This is not just a party – it’s an explosive fusion of technology, networking, and sensory indulgence like you’ve never seen before.
Speakers

Adam Darrah
Vice President of Intelligence · ZeroFox

Adam Vincent
Founder and CEO · Bricklayer AI

AJ Nash
CEO · Unspoken Security

Amit Serper
Lead Security Researcher · CrowdStrike

Angel Smith
President of Global Public Sector · Virtru

Avi Rubin
Professor Emeritus, Johns Hopkins University & Founder and Managing Director, Harbor Experts · Harbor Labs

Aviyam Ivgi
Founder · Stealth

Caitlin Condon
VP of Security Research · VulnCheck

Cristian Leo
Data Scientist · AWS

Daniel Begimher
Senior Security Engineer · AWS

David Etue
Chief Strategy Officer · Cyberbit

David Girvin
AI Security Architect · Sumo Logic

David Tohn
CEO · BTS Software Solutions (BTS)

Dawn-Marie Vaughan
Cybersecurity Global Offering Lead · DXC Technology

Dr. Aleksandr Yampolskiy
Co-Founder and CEO · SecurityScorecard

Dr. Louis DeWeaver
Cyber Security Consultant · Marsh McLennan Agency

Dr. Ray Islam
Adjunct Professor (NLP/ML) · George Mason University

Geoff Robinson
Principal Consultant · ivision

Harshit Kohli
Senior Technical Account Manager · AWS

Hedi Ingber
Founder · Stealth

Iftach Ian Amit
Co-Founder and CEO · Gomboc.ai

Jackson Reed
Founder · Barding Defense

Jacob Gajek
Principal Security Researcher · eSentire

James Foster
CEO · eSentire

James Hirmas
Chief Strategy Officer (CSO) · Easy Dynamics

Jamie Tolles
Vice President, Incident Response · IDX

Jessica Gulick
CEO & Founder · Katzcy

Joel Bauman
Founder and CEO · Synqly

John Spiegel
Field CTO · HPE

Josh Mason
Solutions Architect · Synack

Justin Chavez
Head of Applied AI Engineering · Inkeep

Kevin Kiley
CEO · Airia

Kyle Waggoner
CISO · Perdue Farms

Larry Letow
Chairperson · Cyber Security Hall of Fame

Lavnish Talreja
Data Engineer · McKinsey & Company, Inc.

Matt Brown
Founder & Principal Consultant · Brown Fine Security

Michael Baader
Vice President, Divisional Information Security Officer · Capital One

Michal Bazyli
Founding Cybersecurity Researcher · Cracken

Mike Price
Vice President, Product & Engineering · VulnCheck

Nish Majmudar

Patricia Titus
Field CISO · Abnormal AI

Rakesh Pal
Sr. Technical Account Manager · Amazon Web Services

Randy Marchany
Chief Information Security Officer · Virginia Tech

Ryan Hasmatali
Software Developer · eSentire

Samantha St-Louis
VP of AI App Innovation · Atmosera

Scott Miller
Security Consultant and Penetration Tester · Accenture Security

Sean Satterlee
Senior Principal Penetration Tester · Device Recon Labs

Sounil Yu
Creator, Cyber Defense Matrix & Chief AI Officer · Knostic

Steven Solomon
Cybersecurity Consultant · American Cyber

Tanvi Desai
Sr. Cloud Consultant · Google Cloud

Travis Lowe
Cloud Security Research · CrowdStrike

Venice Goodwine
Independent Board Member, USAF CIO (R)

Vincent Swolfs
Director of Hacking & CISO · CISA.one

Vishavjit Singh
Senior Threat Intelligence Researcher · eSentire

Wes Wright
Senior Security Consultant · Bishop Fox

Yonatan Perry
Director of Engineering · CrowdStrike

Yuthvek MJ
Security Researcher
Location
RBLN East takes over the Hyatt Regency Reston. Sessions, workshops, and hands-on chaos happen on the second floor. Room block and parking info below—get ready to make your mark.
Transportation & Parking
- Washington Dulles International Airport (IAD): 6 miles
- Ronald Reagan Washington National Airport (DCA): 24 miles
- Metro (WMATA) Reston Town Center West Station: 6 blocks
- Parking: 25% discount on overnight and daily self-parking for all RBLN attendees