
Builders are shipping AI systems with attack surfaces they don’t understand — poisoned models, hijacked agents, backdoored pipelines. Every layer of the stack needs defending, and most teams don’t know where to start.
Both problems call for architecture-level solutions. I'm building them.
I run Singularity Cybersecurity, an AI security research lab that runs continuous experiments, publishes the findings, and ships what works as open-source tools.
What I work on
AI Security Architecture — adversarial evaluation of ML systems, from supervised classifiers and unsupervised anomaly detectors to RL agents and LLM defenses. Current work: RL agent attack frameworks, model behavioral fingerprinting, and adversarial training optimization for LLM defenses.
Adversarial Control Analysis — a methodology for designing ML systems so that the inputs driving decisions sit outside adversary influence. Applied across six security domains. The principle, sketched below: classify inputs by who controls them, and the defense architecture designs itself.
govML — a governance framework (50+ templates, 10 profiles, 20+ generators) that enforces reproducibility and decision traceability across ML projects.
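
To make the Adversarial Control Analysis principle concrete, here is a minimal Python sketch of the classification step under stated assumptions: the `Controller` enum, `FeatureSpec` dataclass, and `plan_defense` helper are hypothetical names used only for illustration, not part of a published framework.

```python
# Minimal sketch: partition model inputs by who controls them, then let that
# partition dictate the defensive posture. Names here are illustrative only.

from dataclasses import dataclass
from enum import Enum, auto


class Controller(Enum):
    ATTACKER = auto()     # value can be chosen freely by an adversary
    DEFENDER = auto()     # value is produced by infrastructure we control
    ENVIRONMENT = auto()  # value is not directly set by either party


@dataclass
class FeatureSpec:
    name: str
    controller: Controller


def plan_defense(features: list[FeatureSpec]) -> dict[str, list[str]]:
    """Split features by controller: base decisions on defender- and
    environment-controlled signals; treat attacker-controlled signals as
    untrusted hints at most."""
    plan: dict[str, list[str]] = {"decide_on": [], "treat_as_untrusted": []}
    for feature in features:
        if feature.controller is Controller.ATTACKER:
            plan["treat_as_untrusted"].append(feature.name)
        else:
            plan["decide_on"].append(feature.name)
    return plan


if __name__ == "__main__":
    # Hypothetical phishing-detection feature set.
    phishing_features = [
        FeatureSpec("email_body_text", Controller.ATTACKER),
        FeatureSpec("sender_display_name", Controller.ATTACKER),
        FeatureSpec("sending_domain_age", Controller.ENVIRONMENT),
        FeatureSpec("internal_auth_logs", Controller.DEFENDER),
    ]
    print(plan_defense(phishing_features))
```

The example uses a phishing-detection feature set, but the same partition-first step applies in any of the domains above: whatever the adversary can set freely is kept out of the decision path.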
Background
Data Analyst (FireEye 2011-15) → Sales Executive (FireEye/Mandiant 2015-19, Fidelis 2019-20) → Founder/CEO (Singularity Cybersecurity 2021-present) → MS Computer Science at Georgia Tech (Machine Learning specialization).
CFA Charterholder · CISSP · BSc Biochemistry & Molecular Biology.