We Mapped 8 AI Security Research Projects to OWASP, NIST, and MITRE — Here's Where the Gaps Are

Five of eight research projects map to OWASP LLM01 (Prompt Injection). Three OWASP categories — LLM03 (Supply Chain), LLM07 (System Prompt Leakage), and LLM08 (Vector/Embedding Weaknesses) — have zero research coverage. That gap shows where the next round of experiments needs to go. I published a full standards mapping that cross-references eight original AI security research projects against four frameworks: the OWASP Top 10 for Large Language Model (LLM) Applications, the OWASP Top 10 for Agentic Applications, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), and MITRE's Adversarial Threat Landscape for AI Systems (ATLAS). ...
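The gap analysis above is a simple coverage count: tally how many projects touch each OWASP LLM Top 10 category, then list the categories no project touches. A minimal sketch, assuming an illustrative project-to-category mapping (the project names and their assignments below are placeholders, not the actual eight projects; only the aggregate numbers match the post):

```python
from collections import Counter

# The ten OWASP LLM Top 10 category IDs: LLM01..LLM10.
OWASP_LLM_TOP10 = [f"LLM{n:02d}" for n in range(1, 11)]

# Hypothetical project -> category mapping, chosen so the totals
# match the post (five projects on LLM01; LLM03/07/08 uncovered).
project_mappings = {
    "project_a": ["LLM01"],
    "project_b": ["LLM01", "LLM02"],
    "project_c": ["LLM01"],
    "project_d": ["LLM01", "LLM06"],
    "project_e": ["LLM01"],
    "project_f": ["LLM04"],
    "project_g": ["LLM05"],
    "project_h": ["LLM09", "LLM10"],
}

# Count how many projects cover each category.
coverage = Counter(cat for cats in project_mappings.values() for cat in cats)

# Categories with zero research coverage.
gaps = [cat for cat in OWASP_LLM_TOP10 if coverage[cat] == 0]

print(coverage["LLM01"])  # 5
print(gaps)               # ['LLM03', 'LLM07', 'LLM08']
```

The same two-line tally works for any framework with an enumerable category list (NIST AI RMF functions, ATLAS tactics): swap in that framework's IDs and the gaps fall out.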

March 31, 2026 · 2 min · Rex Coleman

AI Security Research → OWASP, NIST, and MITRE Standards Mapping

Last updated: 2026-03-31. A cross-reference between original AI security research and the frameworks practitioners use to assess, govern, and defend AI systems. Why this mapping exists: security practitioners work within frameworks — the Open Web Application Security Project (OWASP), the National Institute of Standards and Technology (NIST), and MITRE. Researchers often publish findings without connecting them to these frameworks, leaving practitioners to do the mapping themselves (or never find the research at all). ...

March 31, 2026 · 11 min · Rex Coleman
© 2026 Rex Coleman. Content under CC BY 4.0. Code under MIT. GitHub · LinkedIn · Email