Apply Adversarial Control Analysis to Your ML System in 3 Steps

Note (2026-03-19): This was an early exploration in my AI security research. The methodology has known limitations documented in the quality assessment. For the current state of this work, see Multi-Agent Security and Verified Delegation Protocol. Problem Statement: You have deployed an ML model and someone asks: “Is it robust to adversarial attack?” You do not have a principled way to answer. You could fuzz every input, but that is expensive and tells you nothing about which attacks are structurally impossible versus which are just untested. You need a method that maps the attack surface before you start testing. ...
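
The post's three steps are truncated above, but the core move it describes elsewhere on this page (classify every input by who controls it, then read the attack surface off that inventory) can be sketched in a few lines. The following is a minimal Python illustration; the feature names, the three control categories, and the ratio summary are my own assumptions for the sketch, not the post's code or data.

```python
# Minimal sketch of an adversarial-control inventory. The feature names and
# the three control categories are illustrative assumptions, not measured data.
from dataclasses import dataclass
from enum import Enum

class Control(Enum):
    ATTACKER = "attacker-controlled"       # attacker can set this value freely
    CONSTRAINED = "partially constrained"  # attacker influences it, within limits
    DEFENDER = "defender-observed"         # measured or derived on the defender's side

@dataclass
class Feature:
    name: str
    control: Control

# Step 1 (assumed): classify every model input by who controls it.
features = [
    Feature("payload_bytes", Control.ATTACKER),
    Feature("destination_port", Control.ATTACKER),
    Feature("inter_arrival_time", Control.CONSTRAINED),
    Feature("tcp_handshake_rtt", Control.DEFENDER),
    Feature("flow_duration_observed", Control.DEFENDER),
]

# Step 2 (assumed): summarize the attack surface as a controllability ratio.
attacker_controlled = [f for f in features if f.control is Control.ATTACKER]
ratio = len(attacker_controlled) / len(features)

# Step 3 (assumed): report which inputs deserve adversarial testing budget and
# which attacks are structurally out of reach because the attacker cannot set them.
print(f"attacker-controllable fraction: {ratio:.0%}")
for f in features:
    print(f"{f.name:25s} -> {f.control.value}")
```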

March 19, 2026 · 7 min · Rex Coleman

Model choice matters less than feature controllability

Across adversarial ML experiments on network intrusion detection, the performance gap between the most and least robust models was less than 8%. The gap between high-controllability and low-controllability feature sets was over 40%. Model selection is a rounding error compared to feature architecture. Why this matters: When teams build ML systems that face adversarial inputs — intrusion detection, fraud detection, spam filtering, malware classification — the default question is “which model is most robust?” That’s the wrong first question. The right first question is “which features does the attacker control?” ...
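
As a toy illustration of the comparison structure behind those numbers (models crossed with feature sets, each scored clean and under perturbation of the attacker-controlled columns), here is a sketch on synthetic data. The feature split, the noise "attack", and the two models are assumptions for illustration only; this is not the intrusion-detection experiment and will not reproduce the 8% / 40% figures.

```python
# Toy sketch: compare (model choice) vs. (feature-set choice) under a crude
# perturbation of attacker-controlled columns. Synthetic data, illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=10, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assumption: columns 0-4 are attacker-controlled, columns 5-9 are not.
feature_sets = {
    "high-controllability": list(range(0, 5)),
    "low-controllability": list(range(5, 10)),
}
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for fs_name, cols in feature_sets.items():
    for m_name, model in models.items():
        model.fit(X_train[:, cols], y_train)
        # Crude "attack": add noise only to the columns the attacker controls.
        X_adv = X_test.copy()
        attacked = [c for c in cols if c < 5]
        X_adv[:, attacked] += rng.normal(0, 2.0, size=(len(X_adv), len(attacked)))
        clean = model.score(X_test[:, cols], y_test)
        adv = model.score(X_adv[:, cols], y_test)
        print(f"{fs_name:22s} {m_name:7s} clean={clean:.2f} attacked={adv:.2f}")
```

In this toy version the low-controllability rows do not move at all under the perturbation, whichever model they use, because none of their columns are attacker-controlled; that is the shape of the comparison, not a reproduction of the post's results.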

March 19, 2026 · 2 min · Rex Coleman

The same adversarial principle predicts robustness across 6 security domains

Adversarial Control Analysis (ACA) — the principle that system robustness depends on which features an attacker can manipulate — predicted security outcomes correctly across 6 different domains: network intrusion detection, fraud detection, vulnerability prioritization, agent security, supply chain analysis, and post-quantum cryptography migration. Why this matters: Security teams typically treat each domain as its own silo with its own threat models, its own tools, and its own assessment frameworks. But the underlying adversarial dynamic is the same everywhere: an attacker controls some inputs, the defender controls others, and robustness depends on the ratio between them. ACA formalizes this into a repeatable methodology. When I applied the same feature controllability analysis across all six domains, the systems with the highest ratio of attacker-controlled features were consistently the least robust — regardless of model architecture, data modality, or deployment context. ...
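
A minimal sketch of how that cross-domain comparison could be scored: compute the fraction of attacker-controlled inputs per domain and rank domains by it. The per-domain feature counts below are placeholders I made up for the illustration, not the post's measurements.

```python
# Sketch of the cross-domain ranking step only. Per-domain counts are
# placeholder values, not measured results.
domains = {
    # domain: (attacker-controlled features, total features)
    "network intrusion detection": (12, 20),
    "fraud detection": (9, 15),
    "vulnerability prioritization": (4, 18),
    "agent security": (14, 16),
    "supply chain analysis": (6, 14),
    "post-quantum migration": (3, 12),
}

def controllability_ratio(attacker: int, total: int) -> float:
    """Fraction of model inputs the attacker can manipulate directly."""
    return attacker / total

# ACA's prediction, as stated above: higher ratio => less robust system.
ranked = sorted(domains.items(),
                key=lambda kv: controllability_ratio(*kv[1]),
                reverse=True)
for name, (a, t) in ranked:
    label = "low" if a / t > 0.5 else "higher"
    print(f"{name:30s} ratio={controllability_ratio(a, t):.2f} "
          f"predicted robustness: {label}")
```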

March 19, 2026 · 2 min · Rex Coleman

Why AI-Powered Attacks Need Architecture-Level Defense

Thesis: Point solutions — WAFs, signature-based antivirus, rule-based SIEMs — fail against AI-powered attacks because AI attacks adapt faster than signatures update. The defense must be architectural. I’ve spent the last four months building and attacking ML-based security systems across six domains. The consistent finding is that the model you choose matters far less than the architecture you deploy it in. A well-architected defense with a mediocre model beats an unstructured defense with a state-of-the-art model — across all six domains I tested. ...
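
As one hedged illustration of what an architectural defense can mean in code (structural checks the model sits inside, rather than the model standing alone), here is a sketch. The specific layers, the invariant on a server-side measurement, and the function names are my assumptions for the example, not the architecture the post lays out.

```python
# Illustrative sketch: a scoring model wrapped in structural checks it cannot
# be tricked out of. Layers and names are assumptions, not the post's design.
from typing import Callable, Mapping

def layered_decision(event: Mapping[str, float],
                     model_score: Callable[[Mapping[str, float]], float],
                     invariants: list[Callable[[Mapping[str, float]], bool]],
                     threshold: float = 0.5) -> str:
    # Layer 1: hard invariants on defender-observed signals. These fire
    # regardless of what the (possibly fooled) model says.
    if any(not check(event) for check in invariants):
        return "block"
    # Layer 2: the ML model, which may be mediocre. The architecture assumes
    # it can be wrong and bounds the damage when it is.
    if model_score(event) >= threshold:
        return "block"
    # Layer 3: allow, and keep the event for retraining / drift monitoring.
    return "allow"

# Usage with a stand-in model and one invariant on an infrastructure-side
# measurement the attacker does not control.
decision = layered_decision(
    event={"payload_entropy": 0.91, "server_side_rtt_ms": 3.0},
    model_score=lambda e: 0.2,  # stand-in for any trained classifier
    invariants=[lambda e: e["server_side_rtt_ms"] > 1.0],
)
print(decision)  # "allow": the invariant holds and the model score is low
```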

March 19, 2026 · 6 min · Rex Coleman

One Principle, Six Domains: Adversarial Control Analysis for AI Security

Note (2026-03-19): This was an early exploration in my AI security research. The methodology has known limitations documented in the quality assessment. For the current state of this work, see Multi-Agent Security and Verified Delegation Protocol. I started with one question: if a network attacker can only control some features of network traffic, shouldn’t our IDS defenses focus on the features they can’t control? That question became a methodology. I called it adversarial control analysis (ACA) — classify every input by who controls it, then build defenses around the uncontrollable parts. It worked on intrusion detection. So I tried it on vulnerability prediction. Same result. Then AI agents. Then cryptography. Then financial fraud. Then software supply chains. ...

March 16, 2026 · 4 min · Rex Coleman
© 2026 Rex Coleman. Content under CC BY 4.0. Code under MIT. GitHub · LinkedIn · Email