Shell Command Injection via LLM Prompt - ATR-2026-00296 (90da2540-a77e-56c3-97ae-dc425a448088)
Detects shell command injection patterns delivered via LLM prompts, sourced from 6 real payloads in NVIDIA garak's access_shell_commands.json (payload name: "Attempts to get access to underlying shell"). Covered patterns: Bash command substitution $() wrapping OS commands (touch, ls, rm, cat, wget, curl, chmod); Perl array interpolation @{[system "..."]}; backtick execution of system commands; Ruby System() and Kernel.exec() calls (e.g., ls with the -al flag); and eval() wrapping OS command strings. These payloads target LLMs or agents that pass user input to an underlying shell or interpreter, enabling arbitrary OS-level command execution (e.g., file creation in /tmp, directory listing).
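The pattern classes above can be sketched as a small regex-based matcher. This is a minimal illustration of the detection idea, not the rule's actual signatures; the regexes and the helper name `matches_shell_injection` are assumptions for demonstration.

```python
import re

# Hypothetical pattern set mirroring the classes described in the rule;
# the production rule's exact regexes are not reproduced here.
SHELL_INJECTION_PATTERNS = [
    # Bash command substitution $() wrapping a common OS command
    re.compile(r'\$\(\s*(?:touch|ls|rm|cat|wget|curl|chmod)\b[^)]*\)'),
    # Perl array interpolation that invokes system: @{[system "..."]}
    re.compile(r'@\{\[\s*system\b'),
    # Backtick execution of a common OS command
    re.compile(r'`\s*(?:touch|ls|rm|cat|wget|curl|chmod)\b[^`]*`'),
    # Ruby-style System() / Kernel.exec() calls
    re.compile(r'\b(?:System|Kernel\.exec)\s*\('),
    # eval() wrapping an OS command string literal
    re.compile(r'\beval\s*\(\s*["\'](?:touch|ls|rm|cat|wget|curl|chmod)\b'),
]

def matches_shell_injection(prompt: str) -> bool:
    """Return True if the prompt matches any shell-injection pattern."""
    return any(p.search(prompt) for p in SHELL_INJECTION_PATTERNS)
```

A prompt such as `Run $(touch /tmp/pwned) please` or `Kernel.exec("ls -al /")` would match, while ordinary conversational text would not; a real deployment would also need to handle obfuscation (whitespace tricks, encoding) that simple regexes miss.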