AI Security Has a Shipping Problem

Thesis: The AI security industry produces frameworks and guidelines, but almost no one ships working tools that practitioners can deploy today. The gap between “risk identified” and “risk mitigated” is wider in AI security than in any other area of cybersecurity I’ve worked in. We have more frameworks per deployed tool than any domain in the history of information security. And the frameworks keep coming while the tools don’t.

The Evidence

1. OWASP published the Agentic Top 10 in late 2025. No tools enforce it. ...

March 19, 2026 · 5 min · Rex Coleman

The Agent Security Gap Nobody's Talking About: Skills Run Every Heartbeat

Thesis: Everyone’s worried about prompt injection, but the real agent attack surface is third-party skills: they execute persistently on every heartbeat cycle, not once per conversation. I keep having the same conversation. Someone asks about agent security. I say “third-party skills.” They say “you mean prompt injection?” No. I mean code you installed from a marketplace where one in five entries is actively malicious, running inside your agent 144 times per day with full access to its memory, context, and credentials. ...
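For readers who think in code, here is what that execution model amounts to. This is a deliberately minimal sketch of the pattern the post describes, not any real agent framework; AgentState, Skill, run_heartbeats, and HEARTBEAT_INTERVAL are all invented names for illustration.

```python
import time
from dataclasses import dataclass, field
from typing import Callable

HEARTBEAT_INTERVAL = 600  # seconds: one cycle every 10 minutes = 144 runs per day

@dataclass
class AgentState:
    memory: dict = field(default_factory=dict)       # long-term memory store
    context: list = field(default_factory=list)      # conversation context
    credentials: dict = field(default_factory=dict)  # API keys and tokens

Skill = Callable[[AgentState], None]  # a skill receives the whole state, unscoped

def run_heartbeats(state: AgentState, skills: list[Skill]) -> None:
    # Every installed skill executes on every cycle, whether or not a
    # conversation is active; there is no per-call permission boundary.
    while True:
        for skill in skills:
            skill(state)  # a malicious skill can read or exfiltrate anything here
        time.sleep(HEARTBEAT_INTERVAL)
```

Everything dangerous in this sketch follows from two choices: skills run on a timer rather than on user action, and they receive unscoped agent state rather than a capability-limited view.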

March 19, 2026 · 7 min · Rex Coleman

How I Govern AI-Assisted ML Projects

After four ML projects at Georgia Tech, I’d run 14 manual audit cycles with 30+ findings. The governance wasn’t the problem; the manual enforcement was. So I built govML.

The Problem

Every ML project needs governance: reproducible experiments, documented decisions, data integrity checks, fair comparisons. But enforcing governance manually is a workflow killer. My unsupervised learning project had 7 audit cycles with 49+ findings. The RL project had 14 cycles with 30+ findings. I was spending more time auditing than experimenting. ...
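The excerpt doesn’t show govML’s internals, so the following is only a hypothetical sketch of the kind of check such a tool could automate, assuming a JSON manifest of dataset hashes committed alongside the code; the function name and interface are invented for illustration, not govML’s actual API.

```python
import hashlib
import json
from pathlib import Path

def check_data_integrity(data_dir: Path, manifest_path: Path) -> list[str]:
    """Compare dataset file hashes against a committed manifest.

    Returns a list of findings; an empty list means the check passes.
    """
    manifest = json.loads(manifest_path.read_text())  # e.g. {"train.csv": "<sha256>"}
    findings = []
    for name, expected in manifest.items():
        path = data_dir / name
        if not path.exists():
            findings.append(f"missing data file: {name}")
            continue
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected:
            findings.append(f"hash mismatch for {name}: data changed without re-audit")
    return findings
```

Run on every commit, a check like this turns a recurring manual audit finding into a failing CI job, which is the core of the argument.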

March 14, 2026 · 5 min · Rex Coleman