Environment-Specific Bugs and the ‘Works on My Machine’ Threat
A recurring theme in our application security assessments is the gap between a developer's local environment and production. During an engagement reviewing a code intelligence platform's RPC layer, we encountered a defect that exemplifies this risk. An internal service constructed git commands by assembling an argument array: ["git", "log", "--pretty=%H %P", ...]. This array was passed to an RPC service that prepended the binary name, producing the effective invocation git git log --pretty='%H %P'. The command failed on every machine in the fleet with the error: git: 'git' is not a git command. Did you mean 'init'? Every machine, that is, except the developer's.
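The failure mode can be sketched in a few lines. This is a hypothetical reconstruction (the function name and the language are illustrative; the original service was not necessarily written in Python), but it captures the essential mistake: the caller supplies an argument array that already includes the binary name, and the RPC layer prepends it again.

```python
# Illustrative sketch of the defect; names are hypothetical.
def build_invocation(args, binary="git"):
    """Models the RPC layer, which prepends the binary name to
    whatever argument array the caller supplies."""
    return [binary] + args

# The caller mistakenly includes the binary name itself:
argv = build_invocation(["git", "log", "--pretty=%H %P"])
print(argv)  # ['git', 'git', 'log', '--pretty=%H %P']

# The correct call passes only the subcommand and its arguments:
argv = build_invocation(["log", "--pretty=%H %P"])
print(argv)  # ['git', 'log', '--pretty=%H %P']
```

The defect is invisible at the call site: the caller's argument array looks like a perfectly ordinary command line, and only the combination with the RPC layer's prepending behaviour produces the doubled prefix.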
The developer had, at some earlier point, grown accustomed to accidentally typing git git status and similar doubled prefixes. Rather than correcting the habit, they had added a git alias to their global configuration: [alias] git = !git. This alias silently consumed the duplicate git token, causing the malformed command to succeed locally. The developer's tests passed. Their manual verification passed. The defect was only discovered when the service was deployed to a staging environment where no such alias existed.
The Security Implications of Environment Divergence
From a security perspective, this class of defect is more dangerous than it first appears. The immediate consequence was a service failure — an availability issue. But the underlying pattern, where a developer's local configuration masks a defect that behaves differently in production, applies equally to authentication bypasses, privilege escalation, and data exposure. If a developer's local environment has a permissive CORS configuration, a relaxed TLS verification setting, or a custom certificate authority in their trust store, security-critical code paths may appear to function correctly during development while failing or behaving unexpectedly in production. The inverse is equally concerning: production-only configurations that are never exercised during development create untested code paths.
The git alias case is instructive because the mitigation that created the vulnerability was itself a reasonable developer convenience. There was no malicious intent, no negligence in the usual sense. The developer solved a personal workflow friction in a way that happened to create a blind spot. This is why we treat environment parity as a security control rather than a convenience. Reproducible build and test environments — whether through containerisation, Nix, or strictly managed CI images — eliminate an entire category of defects that are invisible to the person who introduced them.
What to Look For During Assessments
When conducting security assessments, we specifically examine whether the development, testing, and production environments share the same foundational tooling configuration. Key areas include: shell aliases and git configuration that may alter command behaviour; language runtime versions and compiler flags; environment variables that toggle features or disable security controls (common examples include NODE_TLS_REJECT_UNAUTHORIZED=0 and PYTHONDONTWRITEBYTECODE); and operating system-level differences such as filesystem case sensitivity between macOS and Linux. Each divergence point is a potential location where a defect — security-relevant or otherwise — can hide.
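The environment-variable portion of such a review can be automated as a start-up or CI pre-flight guard. A sketch, with an illustrative (not exhaustive) variable list:

```python
# Pre-flight guard: flag environment variables known to relax security
# controls. The list below is illustrative, not exhaustive.
SUSPECT_VALUES = {
    # Disables TLS certificate verification in Node.js processes.
    "NODE_TLS_REJECT_UNAUTHORIZED": "0",
}

def find_divergences(env):
    """Return the names of suspect variables set to a dangerous value."""
    return sorted(
        name for name, bad in SUSPECT_VALUES.items() if env.get(name) == bad
    )

# In CI this would be called with dict(os.environ); a sample dict
# keeps the sketch self-contained.
print(find_divergences({"NODE_TLS_REJECT_UNAUTHORIZED": "0", "PATH": "/usr/bin"}))
# ['NODE_TLS_REJECT_UNAUTHORIZED']
```

Failing the build when such a variable is present turns a silent local divergence into a visible, diagnosable error.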
Our standard recommendation is to enforce that all automated tests execute in an environment that matches production as closely as is practical, and that developer onboarding documentation explicitly warns against local customisations that alter the behaviour of tools invoked by the application under development. The cost of maintaining environment parity is a fraction of the cost of diagnosing a production incident caused by a configuration that exists on exactly one developer's laptop.