There’s a gap between ML research and production security. Researchers publish attacks that don’t account for real-world constraints. Security teams deploy models without adversarial evaluation. Nobody builds the bridge.

That’s what I’m building.

I spent 15 years in cybersecurity — data analytics and enterprise sales at FireEye and Mandiant, then founding two startups. Now I’m completing my MS in Computer Science at Georgia Tech (Machine Learning specialization) and shipping open-source tools that close the gap between AI capability and AI security. Current focus:

AI Security Architecture — adversarial evaluation of ML systems across 4 paradigms (supervised, reinforcement, unsupervised, optimization). Building RL agent attack frameworks, model behavioral fingerprinting, and adversarial training optimization for LLM defenses.

ML Systems Governance — govML, an open-source framework (39 templates, 4 profiles, 8+ generators) that enforces reproducibility and decision traceability across ML projects. Contract-driven development with machine-checkable provenance.
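govML's internals aren't shown here, but to illustrate what "machine-checkable provenance" can mean in practice, here's a minimal sketch (hypothetical names and structure, not govML's actual API): a run record that hashes its own inputs, so any later drift between a recorded decision and the config it claims is detectable by machine.

```python
import hashlib
import json


def provenance_record(config: dict, data_digest: str) -> dict:
    """Build a provenance record whose hash covers every input.

    If the config or the data digest changes after the fact, the
    stored hash no longer matches and the record fails verification.
    """
    body = {"config": config, "data_digest": data_digest}
    canonical = json.dumps(body, sort_keys=True).encode()
    return {**body, "record_hash": hashlib.sha256(canonical).hexdigest()}


def verify(record: dict) -> bool:
    """Re-derive the hash from the record's contents and compare."""
    body = {k: v for k, v in record.items() if k != "record_hash"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == record["record_hash"]
```

The point of the contract: a human can always claim a model was trained with a given config, but only a record like this lets a tool check it.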

Adversarial Control Analysis — a methodology for designing ML systems where the inputs that drive decisions are outside adversary influence. Applied across 6 security domains with consistent results. The principle: classify inputs by who controls them, and the defense architecture designs itself.
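The classification principle above can be sketched in a few lines. This is an illustrative toy, not the methodology's actual tooling; the class names and defense mappings are assumptions for the example:

```python
from dataclasses import dataclass
from enum import Enum


class Controller(Enum):
    """Who can influence an input's value."""
    DEFENDER = "defender"    # trusted: your own telemetry, infrastructure
    ADVERSARY = "adversary"  # untrusted: anything the attacker authors
    SHARED = "shared"        # mixed: third-party or environmental signals


@dataclass
class ModelInput:
    name: str
    controller: Controller


# Each control class implies a defensive posture (hypothetical mapping).
POSTURE = {
    Controller.DEFENDER: "use directly",
    Controller.ADVERSARY: "never let it drive the decision alone",
    Controller.SHARED: "validate and bound before use",
}


def defense_plan(inputs: list[ModelInput]) -> dict[str, str]:
    """Classify inputs by controller; the plan falls out of the mapping."""
    return {i.name: POSTURE[i.controller] for i in inputs}
```

For a phishing detector, say, the email body is adversary-controlled while your own DNS logs are defender-controlled; classifying them this way dictates which features the decision may rest on.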

Background

Data Analyst (FireEye 2011-15) → Sales Executive (FireEye/Mandiant 2015-19, Fidelis 2019-20) → Founder/CEO (Watchtower, Singularity 2020-present) → MS Computer Science at Georgia Tech (Machine Learning specialization) → building at the intersection of AI and security.

Credentials

CFA Charterholder · CISSP · BSc Biochemistry & Molecular Biology · MS Computer Science (in progress, Georgia Tech OMSCS — Machine Learning specialization)

Connect

GitHub · LinkedIn · RSS