Bug Bounty Village × CTF.ae
Official DEFCON 33 Contest / Las Vegas

Building the most realistic bug bounty CTF ever run at DEFCON

We invented the bug bounty CTF format and delivered it at DEFCON 33. The result: GeneQuest, a vulnerable lab so realistic that players forgot they were in a competition.

26+
Vulnerabilities
10
Microservices
600+
Participants
3
Days Live

The Challenge

The Bug Bounty Village CTF is an official DEFCON contest, one of the recognized competitions at the world's most influential hacker conference. The Village's mission is to bridge the gap between security researchers and vulnerability disclosure platforms through hands-on technical challenges. They needed a CTF that felt like a real bug bounty program, not a typical CTF.

Bug Bounty Village co-founders Harley Kimball and Ariel Garcia trusted us with a concept that had never been done before: a bug bounty CTF. We had to invent the game logic from scratch, balancing the realism of bug bounty with the gamification of a CTF. We met with triagers from HackerOne to understand their workflow. We studied real genomics companies. And we built something that made experienced bug bounty hunters say it felt like the real thing.

GeneQuest

GeneQuest is a fictional direct-to-consumer genomics company inspired by 23andMe that sells DNA kits and delivers ancestry, trait, and health-risk insights. We built it from the ground up as a fully functional web application with a complete brand identity: marketing copy, product pages, user dashboards, and a research portal. Every detail was designed to make players forget they were in a CTF.

Under the surface, GeneQuest spans 10 microservices with 80+ endpoints, built across 6 frameworks and 4 programming languages. The 26+ vulnerabilities range from simple IDORs and auth bypasses to multi-step attack chains requiring correlation between distant sources and sinks. Every vulnerability was modeled after real bug bounty reports.
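The simpler end of that range can be sketched in a few lines of Python. This is a hypothetical example (the names and data are not from GeneQuest's actual code): an IDOR is simply an endpoint that trusts a client-supplied object ID without checking ownership.

```python
# Illustrative IDOR (insecure direct object reference) sketch.
# All names and records here are hypothetical, not from the lab.

REPORTS = {
    101: {"owner": "alice", "summary": "ancestry report"},
    102: {"owner": "bob", "summary": "health-risk report"},
}

def get_report_vulnerable(session_user: str, report_id: int) -> dict:
    # BUG: trusts the client-supplied report_id with no ownership
    # check, so any authenticated user can read any user's report.
    return REPORTS[report_id]

def get_report_fixed(session_user: str, report_id: int) -> dict:
    report = REPORTS[report_id]
    # FIX: authorize access to the object, not just the request.
    if report["owner"] != session_user:
        raise PermissionError("not your report")
    return report
```

The multi-step chains in the lab start from the same idea but require combining several such primitives across services.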

GeneQuest architecture:
- GeneQuest CMS: Svelte frontend, API layer, Auth / Blogs / Widgets, 4 access levels
- Mailbox: custom mail client, CMS + Portal integration
- LLM + RAG: FastAPI / ChromaDB, Genebot + Researcher Agent
- DNA Analyzer: analysis microservice, report generation
- Internal Portal: Nest.js / WebSockets, crypto failures, 16+ resources, granular access control
- Buckets: blob storage + CDN
- Infrastructure: *.genequest.io, AWS / 50K containers
10
Microservices
80+
Endpoints
6
Frameworks
4
Languages
1.15M
Lines of Code
15,530
LLM Prompts

LLM-Native Challenges

GeneQuest was built LLM-native. Two AI systems were embedded directly into the application, not bolted on as separate challenges, but integrated the way modern web apps actually use AI.

Genebot

A RAG-powered chatbot answering questions about the platform. No direct flags, but it could leak internal documentation needed to solve other challenges. Defended by hardened system prompts and HiddenLayer's AIDR prompt injection detector.
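As a rough illustration of where such a defense sits in the pipeline, the sketch below screens user input before it reaches the model. This is a toy keyword heuristic with hypothetical patterns, not how AIDR actually classifies prompts; production detectors use trained classifiers.

```python
import re

# Toy first-pass prompt-injection filter. Patterns are illustrative
# only; a real detector like AIDR is an ML classifier, not a regex list.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal .*system prompt",
]

def looks_like_injection(user_msg: str) -> bool:
    """Return True if the message matches a known injection pattern."""
    msg = user_msg.lower()
    return any(re.search(p, msg) for p in INJECTION_PATTERNS)
```

In a RAG flow like Genebot's, a check of this kind runs before retrieval and generation, so flagged messages never reach the model or the document store.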

Internal Researcher Agent

An LLM agent with five tools, including directory listing, file reading, knowledge base search, and HTTP requests. Two flags: one for extracting the system prompt, one for bypassing guardrails to achieve path traversal through the agent's file-read tool.
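The traversal flag targets a classic pattern, sketched below with hypothetical paths; the agent's real tool and guardrails are not shown. The vulnerable version joins user-controlled input directly, so a prompt that makes the agent request `../../etc/passwd` escapes the sandbox; the hardened check resolves the path first.

```python
import os

# Hypothetical sandbox root, not the lab's actual path.
SANDBOX = os.path.realpath("/srv/agent-docs")

def resolve_naive(user_path: str) -> str:
    # Vulnerable pattern: blindly joins attacker-influenced input,
    # so "../.." sequences walk out of the sandbox.
    return os.path.join(SANDBOX, user_path)

def is_within_sandbox(user_path: str) -> bool:
    # Hardened check: resolve ".." and symlinks first, then verify
    # the result is still contained in the sandbox directory.
    full = os.path.realpath(os.path.join(SANDBOX, user_path))
    return os.path.commonpath([full, SANDBOX]) == SANDBOX
```

Bypassing the agent's guardrails meant convincing the model to call its file-read tool with input equivalent to the naive case above.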

The AI Agent Experiment

We challenged Ethiack to deploy their autonomous security agent against GeneQuest to see how an AI would perform against a target designed for humans. The results were striking.

In just 4 hours, the agent found 4 flags, including an unintended local file inclusion vulnerability that no human player discovered across the entire 3-day event. The best human researcher found 14 out of 26 vulnerabilities over 3 days (53%). The agent found its 4 in a fraction of the time.

This experiment directly informed our AI Benchmarking service. GeneQuest proved that purpose-built vulnerable labs are the most effective way to evaluate AI model capabilities in realistic conditions. We now offer this as a dedicated service: building custom vulnerable environments for organizations who need to understand what their AI can and cannot do before deploying it.

4hrs
Agent Runtime
4
Flags Found
1
Unintended Bug (found by no human)

Build Timeline

Mar 28

First Meeting with BBV

Scoped the concept of a bug bounty CTF, something that had never been done before.

Apr 16

Research

Researched real genomics companies, mapped their tech stacks, marketing, and products.

Apr 28

Building GeneQuest

Engineered 10 microservices across 6 frameworks and 4 languages. Built a fully functional environment from scratch.

Jun 15

Vulnerability Injection

Designed 26+ vulnerabilities from real bug bounty reports. Built multi-step chains across microservices.

Jul 1

Brand & Polish

Created full brand identity, product pages, and user flows. Made GeneQuest indistinguishable from a real startup.

Jul 20

Infrastructure Ready

Rewrote deployment and reverse proxy to handle 50,000+ concurrent containers (5,000+ players).

Aug 8-10

DEFCON 33

Ran live for three days at the Bug Bounty Village, DEFCON 33, Las Vegas. 305 flag submissions. 20 of 33 flags found.

What Players Said

"It was super realistic."

DEFCON 33 participant

"It felt like real bug bounty."

DEFCON 33 participant

"I had to enumerate really hard."

DEFCON 33 participant

Event Stats

305
Flags Submitted
20/33
Flags Found
15,530
LLM Prompts
<45min
Avg Review Time
1,157
Deployment Hours
53%
Best Human Score

Partners

CTF Triage Partner
PortSwigger

Provided a dedicated team of triagers who reviewed and validated vulnerability reports submitted by hunters throughout the event.

AI Challenge Sponsor
HiddenLayer

Their AIDR prompt injection detector was deployed as a defense layer on GeneQuest's LLM systems.

Need a vulnerable lab?

We build realistic vulnerable environments for CTFs, training, and AI benchmarking. Custom targets at any scale.

Get in touch