Antivirus for AI Models: Behavioral Fingerprinting Detects What Static Analysis Misses

How do you know a model downloaded from Hugging Face hasn’t been backdoored? Static analysis tools like ModelScan check for serialization exploits and known payload patterns. They catch the obvious attacks. But a model poisoned through training data – one that behaves normally on 99.9% of inputs and activates a backdoor only on a specific trigger – passes every static check. The weights look fine. The architecture is standard. The malicious behavior is invisible until the trigger fires. ...

March 16, 2026 · 5 min · Rex Coleman
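The core idea the post teases can be sketched in a few lines: a behavioral fingerprint is just a model's outputs over a fixed probe set, and a poisoned model only diverges from a clean reference when the probes brush against its trigger. Everything below (`clean_model`, `backdoored_model`, the trigger pattern, the probe sets) is a hypothetical toy, not the post's actual pipeline.

```python
# Toy sketch of behavioral fingerprinting: compare a suspect model's
# outputs on a fixed probe set against a trusted reference model.

def clean_model(x):
    # Reference behavior: simple threshold on the input sum.
    return 1 if sum(x) > 1.5 else 0

def backdoored_model(x):
    # Identical behavior except on one specific trigger pattern.
    if x[:2] == [9.0, 9.0]:          # hidden trigger (invented for the sketch)
        return 0
    return 1 if sum(x) > 1.5 else 0

def fingerprint(model, probes):
    """Behavioral fingerprint: the model's outputs over a fixed probe set."""
    return [model(p) for p in probes]

def divergence(fp_a, fp_b):
    """Fraction of probes on which the two fingerprints disagree."""
    return sum(a != b for a, b in zip(fp_a, fp_b)) / len(fp_a)

# Random-looking probes miss the trigger entirely; probes that include a
# suspected trigger pattern expose the divergence.
normal_probes  = [[0.1, 0.2, 0.3], [1.0, 1.0, 0.0], [0.5, 0.5, 0.9]]
trigger_probes = normal_probes + [[9.0, 9.0, 0.0]]

print(divergence(fingerprint(clean_model, normal_probes),
                 fingerprint(backdoored_model, normal_probes)))   # 0.0
print(divergence(fingerprint(clean_model, trigger_probes),
                 fingerprint(backdoored_model, trigger_probes)))  # 0.25
```

This is also why the static checks mentioned above pass: nothing in the backdoored model's weights or serialization is malformed; the difference only exists at inference time.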

Adversarial ML on Network Intrusion Detection: What Adversarial Control Analysis Reveals

After 15 years at Mandiant watching network intrusion detection systems fail against real adversaries, I built one — then tried to break it. The finding that surprised me: the model architecture barely matters for robustness. What matters is which features the attacker can manipulate. The Setup I trained Random Forest, XGBoost, and Logistic Regression classifiers on the CICIDS2017 dataset (2.83M network flow records, 78 features, 15 traffic classes). Standard ML-on-IDS — nothing novel yet. ...

March 14, 2026 · 4 min · Rex Coleman
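The post's headline claim, that attacker-controllable features matter more than model architecture, can be illustrated with a deliberately tiny example: a fixed linear detector over flow features, and a greedy evasion attempt restricted to a controllable subset. The feature names, weights, and threshold here are invented for the sketch and are not drawn from CICIDS2017 or the post's trained models.

```python
# Toy adversarial-control illustration: evasion succeeds or fails depending
# on WHICH features the attacker may perturb, not on the detector itself.

WEIGHTS = {"duration": -0.1, "pkts_per_sec": 2.0, "bytes_per_pkt": 1.0}
THRESHOLD = 1.0  # score above this => flow is flagged as an attack

def score(flow):
    return sum(WEIGHTS[f] * v for f, v in flow.items())

def evade(flow, controllable, step=0.1, max_iters=100):
    """Greedily nudge only attacker-controllable features toward a lower score."""
    flow = dict(flow)
    for _ in range(max_iters):
        if score(flow) <= THRESHOLD:
            return flow, True
        for f in controllable:
            # Push each controllable feature against its weight's sign.
            flow[f] -= step if WEIGHTS[f] > 0 else -step
    return flow, score(flow) <= THRESHOLD

attack = {"duration": 1.0, "pkts_per_sec": 1.5, "bytes_per_pkt": 0.5}  # score 3.4

# Attacker controls the heavily weighted feature: evasion succeeds.
_, evaded_strong = evade(attack, ["pkts_per_sec"])
# Attacker controls only a weakly weighted feature: evasion fails in budget.
_, evaded_weak = evade(attack, ["duration"])
print(evaded_strong, evaded_weak)  # True False
```

Swapping the linear detector for a tree ensemble changes the attack mechanics but not the lesson: if the decisive features are outside the attacker's control, robustness comes nearly for free.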

How I Govern AI-Assisted ML Projects

After four ML projects at Georgia Tech, I’d run 14 manual audit cycles with 30+ findings each. The governance wasn’t the problem — the manual enforcement was. So I built govML. The Problem Every ML project needs governance: reproducible experiments, documented decisions, data integrity checks, fair comparisons. But enforcing governance manually is a workflow killer. My unsupervised learning project had 7 audit cycles with 49+ findings. The RL project had 14 cycles with 30+ findings. I was spending more time auditing than experimenting. ...

March 14, 2026 · 3 min · Rex Coleman
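The kind of manual finding the post describes (a missing seed, an undocumented comparison baseline) is exactly what an automated gate can catch. A minimal sketch of such a check, assuming a hypothetical run-manifest schema that is not govML's actual one:

```python
# Sketch of an automated governance check: validate that an experiment
# manifest records the fields a manual audit would look for.
# REQUIRED and the manifest keys are invented for this illustration.

REQUIRED = ("seed", "dataset_sha256", "metric", "baseline")

def audit(manifest):
    """Return a list of governance findings for one experiment run."""
    findings = []
    for field in REQUIRED:
        if not manifest.get(field):
            findings.append(f"missing or empty: {field}")
    return findings

run = {"seed": 42, "dataset_sha256": "ab12cd", "metric": "macro_f1"}
print(audit(run))  # ['missing or empty: baseline']
```

Run as a pre-commit hook or CI step, a check like this turns an audit cycle's worth of findings into a failed build the moment the gap is introduced.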

Why CVSS Gets It Wrong: ML-Powered Vulnerability Prioritization

After 15 years of incident response at Mandiant, I watched security teams burn countless hours patching CVSS 9.8 vulnerabilities that never got exploited — while CVSS 7.5s got weaponized and led to breaches. CVSS measures severity. Attackers measure opportunity. I trained an ML model on 338,000 real CVEs to find out what actually predicts which vulnerabilities get exploited in the wild — and the answer is not what CVSS thinks it is. ...

March 14, 2026 · 5 min · Rex Coleman
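The severity-versus-opportunity gap the post describes can be made concrete with a precision-at-k comparison: rank the same vulnerabilities once by CVSS and once by a predicted exploitation probability, then ask how many actually exploited CVEs land in each top-k. All CVE IDs, scores, and probabilities below are invented for the sketch, not taken from the 338,000-CVE dataset.

```python
# Toy ranking comparison: CVSS ordering vs. exploit-prediction ordering.
# Tuples are (cve_id, cvss, p_exploit, was_exploited) -- fabricated values.

cves = [
    ("CVE-A", 9.8, 0.02, False),
    ("CVE-B", 9.8, 0.01, False),
    ("CVE-C", 7.5, 0.85, True),
    ("CVE-D", 7.5, 0.70, True),
    ("CVE-E", 6.1, 0.05, False),
    ("CVE-F", 8.8, 0.60, True),
]

def precision_at_k(ranked, k):
    """Fraction of the top-k entries that were actually exploited."""
    return sum(c[3] for c in ranked[:k]) / k

by_cvss  = sorted(cves, key=lambda c: c[1], reverse=True)
by_model = sorted(cves, key=lambda c: c[2], reverse=True)

print(precision_at_k(by_cvss, 3))   # ~0.33: CVSS top-3 mostly unexploited
print(precision_at_k(by_model, 3))  # 1.0: model top-3 all exploited
```

In a patching queue, precision-at-k is the metric that matters: a team that can remediate k vulnerabilities this week wants the exploited ones concentrated in the top k, which is the opportunity signal CVSS was never designed to carry.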

© 2026 Rex Coleman. Blog content licensed under CC BY 4.0. Code under MIT.