50+ ML Governance Templates: What It Takes to Govern AI at Scale

govML ships 57 templates, 10 profiles, and 27 code generators, battle-tested across 10 frontier projects. It's the governance framework I built after discovering that every ML project I ran needed the same 30+ documents, and every team was reinventing them from scratch.

Why this matters: ML governance is the tax you pay to make AI systems auditable, reproducible, and trustworthy. Most teams either skip it (and pay later in failed audits and unreproducible results) or spend weeks building bespoke governance documents for each project. Neither approach scales. govML makes governance a 15-minute setup step instead of a multi-week overhead by encoding best practices into templates that generate project-specific documentation from configuration. ...
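The config-to-templates idea is easy to picture with a toy sketch. The snippet below is my own minimal illustration, assuming a flat config dict and string templates with `$placeholder` fields; it is not govML's actual API, and the filenames and keys are hypothetical.

```python
# Minimal sketch of config-driven doc generation (not govML's actual API).
# The config keys, template names, and placeholder syntax are hypothetical.
from string import Template

config = {
    "project": "fraud-model-v2",
    "owner": "ml-platform-team",
    "data_source": "s3://example-bucket/transactions",
}

templates = {
    "MODEL_CARD.md": "# Model Card: $project\nOwner: $owner\n",
    "DATA_SHEET.md": "# Data Sheet: $project\nSource: $data_source\n",
}

# One pass over the templates yields a project-specific document set.
for filename, body in templates.items():
    print(f"--- {filename} ---")
    print(Template(body).substitute(config))
```

The point of the sketch is the shape of the workflow: the config is written once per project, and every governance document is derived from it rather than hand-authored.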

March 19, 2026 · 2 min · Rex Coleman

Apply Adversarial Control Analysis to Your ML System in 3 Steps

Note (2026-03-19): This was an early exploration in my AI security research. The methodology has known limitations documented in the quality assessment. For the current state of this work, see Multi-Agent Security and Verified Delegation Protocol.

Problem Statement: You have deployed an ML model and someone asks: “Is it robust to adversarial attack?” You do not have a principled way to answer. You could fuzz every input, but that is expensive and tells you nothing about which attacks are structurally impossible versus which are just untested. You need a method that maps the attack surface before you start testing. ...
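To make “map the attack surface before testing” concrete, here is one hedged way such a mapping could start. The feature names and control labels below are hypothetical examples of my own, not drawn from the post.

```python
# Hypothetical sketch: partition model inputs by who controls them, so
# adversarial testing targets only inputs an attacker can actually move.
# Feature names and control labels are illustrative, not from the post.

feature_control = {
    "packet_payload": "attacker",     # attacker chooses these bytes freely
    "inter_arrival_time": "partial",  # attacker influences, cannot set exactly
    "server_tls_version": "defender", # fixed by infrastructure
}

needs_testing = [f for f, who in feature_control.items() if who != "defender"]
structurally_safe = [f for f, who in feature_control.items() if who == "defender"]

print("Test adversarially:", needs_testing)
print("No attacker handle (skip fuzzing):", structurally_safe)
```

Even this toy partition answers the question fuzzing cannot: attacks through the third feature are structurally impossible, while the first two are merely untested until you test them.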

March 19, 2026 · 7 min · Rex Coleman

One Principle, Six Domains: Adversarial Control Analysis for AI Security

Note (2026-03-19): This was an early exploration in my AI security research. The methodology has known limitations documented in the quality assessment. For the current state of this work, see Multi-Agent Security and Verified Delegation Protocol.

I started with one question: if a network attacker can only control some features of network traffic, shouldn’t our IDS defenses focus on the features they can’t control? That question became a methodology. I called it adversarial control analysis (ACA): classify every input by who controls it, then build defenses around the uncontrollable parts. It worked on intrusion detection. So I tried it on vulnerability prediction. Same result. Then AI agents. Then cryptography. Then financial fraud. Then software supply chains. ...
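The one-line recipe (“classify every input by who controls it”) can be sketched in a few lines. The enum and feature set below are my own hypothetical rendering of that idea, not code from the ACA work itself.

```python
# Hypothetical rendering of the ACA core step: tag each input with a
# control level, then anchor the defense on the least-controllable inputs.
from enum import Enum

class Control(Enum):
    FULL = "attacker sets the value freely"
    PARTIAL = "attacker influences, cannot pick exactly"
    NONE = "attacker cannot move this value"

features = {
    "http_user_agent": Control.FULL,
    "connection_duration": Control.PARTIAL,
    "tcp_handshake_rtt": Control.NONE,
}

# Defense rule from the post's principle: weight decisions toward
# features the attacker cannot control.
anchors = [name for name, c in features.items() if c is Control.NONE]
print("Build the detector around:", anchors)
```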

March 16, 2026 · 4 min · Rex Coleman