Practitioner¶
Who this path is for¶
You want to ship secure AI systems, not just read about them. You're comfortable opening a terminal, writing Python, and running a test harness against your own code. You've probably built classical web apps before and want the build → break → secure arc applied to agents, RAG, and MCP.
Prerequisites¶
- Comfortable with Python
- Basic security literacy (you know what SQL injection is)
- A Claude/OpenAI/open-weight API key to run the examples as you read
Sequence¶
- 01 Foundations — Start here — the three mental models you need before any security reasoning holds.
- 02 Building LLM Apps — Start here — Anthropic's canonical agent patterns; Huyen's framework.
- 03 MCP — Start here — the protocol layer you'll be both using and securing.
- 02 Building LLM Apps — Go deeper — multi-agent and advanced RAG patterns.
- 04 Prompt Injection — Start here — the flagship threat; read Willison's canonical post first.
- 04 Prompt Injection — Go deeper then 04 Prompt Injection — Watch — deep on defenses, then watch end-to-end exploitation.
- 05 LLM Vulnerabilities — Start here — OWASP LLM Top 10 plus MITRE ATLAS to round out the taxonomy.
- 06 Agentic Security — Start here — Meta's Rule of Two, excessive agency, trust boundaries.
- 07 Red Teaming — Start here and 07 Red Teaming — Watch — methodology, then the three-part hands-on walkthrough.
- 08 Attacking AI Tooling — Tools — PyRIT, Garak, DeepTeam, Promptfoo; pick one and run it.
- 03 MCP — Watch — the defensive checklist to apply to whatever you just built.
- 10 Incident Response — Start here — what to do when a probe becomes an incident.
- Optional: 12 AI for Pentesting — Start here — if the offensive side interests you, see what AI-powered pentest tooling can and can't do.
What you'll know by the end¶
- How to architect and ship a secure agentic AI system
- How to attack your own system before an adversary does
- Which tools belong in a modern AI red-team kit
- How to structure incident response when the system is the LLM
Where to go next¶
- GRC & Leadership — if you need to justify this work to a compliance team
- AI Engineer → Security — if you came from the ML side and want to fill the adversarial-thinking gap