Dual-track competition: classic CTF with AI assistance + the first dedicated AI security track at an international youth olympiad
AI agents are solving traditional CTF challenges at an accelerating rate. The data is clear -- and it shapes how we design competition for the next generation.
| Source | Date | Result |
|---|---|---|
| CAI Agent (Alias Robotics) | 2025 | #1 at Neurogrid ($50K prize), 91% solve rate; #1 at Dragos OT CTF, 94%; 99th percentile across 5 competitions |
| HackTheBox AI vs Human | Mar 2025 | 5 of 8 AI teams solved 19/20 challenges (95%); best AI team ranked 20th / 403 teams |
| Wiz Web Security Study | 2025 | Claude, GPT, Gemini all solved 9/10 web challenges (90%); cost per challenge < $1 |
| XBOW Pentesting | 2025 | 85% solve rate in 28 minutes; human expert: same rate in 40 hours (85x slower) |
| Cybench (Stanford) | 2025 | Claude Sonnet 4.5: 76.5% (doubled from 35.9% six months prior) |
| DARPA AIxCC Finals | Aug 2025 | Detection: 86%; Patching: 68%; 18 real-world vulnerabilities discovered |
| Anthropic Self-Evaluation | 2025 | PicoCTF: top 3% (297th / 10,460 teams); PlaidCTF & DEF CON Quals: 0 solved |
| Anthropic + Mozilla | Mar 2026 | Claude Opus 4.6 discovered 22 Firefox vulnerabilities in 2 weeks; 14 rated high-severity -- nearly 1/5 of all high-severity Firefox bugs remediated in 2025-26. Automatically developed working exploits for 2 vulnerabilities. |
AI saturates easy-to-medium challenges across all categories. The remaining gap is difficulty tier, not category. Expert-level challenges are shrinking every 6 months.
"Jeopardy-style CTFs have become a solved game for well-engineered AI agents."
-- Alias Robotics (CAI), after winning 5 major CTF competitions in 20255 knowledge domains -- jeopardy-style challenges -- token-limited AI
Professional security researchers already use AI assistants daily. Testing human-AI collaboration is testing a real skill that matters in the workplace.
Token-limited AI assistance allows more challenges to be attempted in the 5-hour window, producing richer score differentiation across skill levels.
Token budgets force resource-management decisions: which challenges benefit most from AI? When is manual analysis faster? A new layer of competitive strategy.
AI assistance helps bridge knowledge gaps for contestants from countries with less CTF infrastructure, while token limits prevent AI from doing all the work.
Stack layout, buffer overflows, shellcode basics
ROP chains, format string attacks, heap exploitation
Kernel exploitation, sandbox escapes, custom mitigations bypass
Symmetric/asymmetric encryption, hashing, encoding
RSA attacks, AES side-channels, protocol weaknesses
Elliptic curve attacks, zero-knowledge proof flaws, post-quantum crypto
File carving, metadata analysis, log analysis
Memory forensics, network packet analysis, disk imaging
Anti-forensics detection, timeline reconstruction, malware artifacts
x86/x64 assembly, disassembly tools, static analysis
Dynamic analysis, anti-debugging, bytecode RE (JVM/.NET)
Obfuscation, custom VM RE, firmware analysis
SQL injection, XSS, CSRF, authentication flaws
SSRF, deserialization, OAuth attacks, race conditions
Prototype pollution, template injection, WebSocket attacks
AI is the target, not the tool -- the dedicated AI challenge track
ICO 2025 introduced AI tools into CTF competition. ICOA 2026 takes the next step -- making AI itself the challenge. Day 2 is entirely dedicated to AI security: attacking, defending, and analysing AI systems. Contestants interact with target models provided by the competition platform, not their own tools.
INDUSTRY TREND: AI SECURITY PRODUCTS
Google launches Sec-Gemini v1 — AI-powered threat analysis, root cause investigation, and vulnerability impact assessment.
Anthropic releases Claude Code Security — autonomous code auditing that found 500+ vulnerabilities undetected for decades in production open-source codebases.
OpenAI launches Codex Security — scanned 1.2M commits, discovered 10,561 high-severity issues and 14 CVEs across OpenSSH, Chromium, PHP.
Three frontier AI companies launched dedicated security products within 12 months. AI-driven vulnerability discovery is no longer experimental — it is an industry product category. CTF4AI trains the next generation to understand, evaluate, and defend against exactly these capabilities.
Prompt injection basics, input manipulation, jailbreaking fundamentals
Advanced jailbreaking techniques, model extraction, training data leakage
Adversarial ML (evasion/poisoning attacks), supply chain attacks on ML pipelines
AI-generated text identification, basic guardrail concepts
Guardrail bypass testing, output verification, watermark detection
Red-teaming LLMs, building robust AI safety filters, evaluating alignment
Deepfake detection (image/audio), AI vs human content classification
AI-generated code audit, model fingerprinting
Attribution analysis, model provenance, synthetic data tracing
Classic CTF aligns with what students are learning. CTF4AI takes them where no curriculum has gone yet.
Cybersecurity is entering secondary education globally, but exclusively at the classic/foundational level. ICO Day 1 (AI4CTF) tests exactly these skills -- making the competition directly relevant to what students are learning in school, while adding the AI collaboration dimension that reflects the modern workplace.
While classic cybersecurity is being standardised in secondary education, dedicated AI security education remains rare and fragmented. Individual courses exist at a handful of leading institutions, but no university in the world offers a complete undergraduate or postgraduate degree programme in AI Security.
| University | Course | Level | Status |
|---|---|---|---|
| Stanford | XACS134 AI Security + CS120 AI Safety | Professional + UG | Available |
| CMU | 15-783 Trustworthy AI | Graduate | Available |
| UC Berkeley | CS294 Agentic AI (Dawn Song) | Graduate | Partial |
| MIT | xPRO AI and Cybersecurity | Executive | Professional only |
| ETH Zurich | SPY Lab (Prof. Tramer) | Graduate research | Research only |
| Oxford | Security and Privacy of ML (MSc) + LASR National Lab | Graduate | Available |
| Cambridge | CSER + Leverhulme CFI (research centres) | Research | Research only |
| NUS | CS5562 Trustworthy ML (Prof Reza Shokri) | Graduate | Available |
| UNSW | Trustworthy AI for Cyber Security + ML for Cyber Security + Deep Learning for Cyber Security | Undergraduate | Available (3 courses) |
Graduate-level courses exist only at a handful of frontier institutions. The field is so new that the textbooks are still being written. A 16-year-old competing in ICOA 2026 Day 2 is engaging with material that most university students will not encounter until graduate school -- if at all.
NATIONAL INITIATIVE
In November 2025, the Australian Government established the AI Safety Institute with $29.9 million in funding — tasked with testing frontier AI models, evaluating safety risks, and sharing findings internationally. Australia is a founding member of the International Network of AI Safety Institutes. ICOA 2026 takes place in the same country that is building the institutional infrastructure for AI safety — connecting youth competition with national policy.
ICOA 2026 collaborates with leading universities and research institutions in cybersecurity and AI safety. Partner announcements coming soon.
Interested in becoming an academic partner?
Contact Us →Classic CTF is the best cybersecurity foundation for young people. The five traditional domains -- binary exploitation, cryptography, digital forensics, reverse engineering, and web security -- teach the fundamentals of how systems work and how they break. These skills are entering secondary curricula worldwide. Day 1 preserves and celebrates this foundation, while adding the modern dimension of human-AI collaboration.
AI is transforming cybersecurity faster than education can adapt. AI agents already solve 90-100% of standard CTF challenges. Google's Big Sleep found a real 0-day in SQLite. DARPA's AIxCC finalists patched 68% of vulnerabilities automatically. The question is no longer whether AI will automate traditional security tasks -- it is how quickly.
ICOA 2026's answer: don't fight the wave -- ride it. Day 1 embraces AI as a tool. Day 2 makes AI the challenge. CTF4AI is not just a competition format -- it is a statement about where cybersecurity is heading.
The students who learn to attack, defend, and evaluate AI systems today will lead the field tomorrow. No undergraduate programme teaches this yet. No other olympiad tests it. ICOA 2026 places the next generation at the frontier -- not following academic curricula, but defining the skills that curricula will eventually teach. This is what an olympiad should be: not a test of what students already know, but a challenge that pushes the boundaries of what is possible.
Aligned with international olympiad standards: IOI, IPhO, and IChO all use 2 days x 5 hours.
Each day scored separately with tasks split into subtasks and individual flags
Contestants ranked individually based on combined scores across both days
Gold, Silver, and Bronze medals awarded based on final individual rankings
Ties broken by sum of last point-increase times per day, in ascending order
Resources to help contestants prepare for both competition days
Dedicated CTF4AI preparation resources — developed in collaboration with frontier AI model companies, universities, and research institutes — coming soon.
Accredited team leaders receive dedicated support from the ICOA 2026 Scientific Committee, including: national selection process guidance, structured training curricula for both AI4CTF and CTF4AI, curated practice challenge sets, and direct communication channels for technical questions. Contact australia@ico2026.au to access team leader resources.
Technology and platform partners for ICOA 2026 competition infrastructure
Register your team for ICOA 2026 Australia