Beyond Prompt Injection: RL Attacks on AI Agent Decision-Making
What happens when you attack an AI agent’s learning process instead of its prompts? I built two custom Gymnasium environments (access control decisions, tool selection), trained 40 RL agents (Q-Learning, DQN, Double DQN, PPO across 5 seeds each), then systematically attacked them with 4 attack classes: reward poisoning, observation perturbation, policy extraction, and behavioral backdoors. 150 attack experiments total. The headline finding: observation perturbation degrades agent performance 20-50x more effectively than reward poisoning. And prompt-injection defenses from my earlier agent red-teaming work are 0% effective against RL-specific attacks — they target completely different surfaces. ...
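To make the observation-perturbation attack class concrete, here is a minimal sketch of the core idea: the attacker never touches rewards or weights, only nudges the agent's observation within an L-infinity budget toward the decision boundary. The linear policy, `W`, and `epsilon` are illustrative assumptions for this sketch, not the actual environments or agents from the experiments above.

```python
import numpy as np

# Hypothetical frozen policy: action = argmax(W @ obs).
# W and the 4-dim observation are toy stand-ins, not the trained DQN/PPO agents.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))  # 2 actions (e.g. allow/deny), 4 observation features

def greedy_action(obs: np.ndarray) -> int:
    return int(np.argmax(W @ obs))

def perturb(obs: np.ndarray, epsilon: float = 0.5) -> np.ndarray:
    """L-infinity bounded observation perturbation: move each feature one
    epsilon-step in the direction that closes the gap between the agent's
    chosen action and the runner-up, pushing it toward a decision flip."""
    q = W @ obs
    best, runner = np.argsort(q)[::-1][:2]
    grad = W[runner] - W[best]  # direction that favors the runner-up action
    return obs + epsilon * np.sign(grad)

obs = rng.normal(size=4)
q_clean = W @ obs
q_attacked = W @ perturb(obs)
```

The point of the sketch is why this surface is so effective: reward poisoning has to fight the training loop over many episodes, while observation perturbation acts at inference time on a frozen policy, so every epsilon of budget converts directly into a smaller (or flipped) action-preference gap.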