The Allure of the Vibe: What Is Vibe Coding?

The term "vibe coding" was coined by Dr. Andrej Karpathy to describe a fundamental shift in how software is created. It is called "vibe coding" because it captures the feeling of telling an AI "kind of, sort of what you want" and watching it transform those "vibes" into workable software. In this model:

  • The Developer as Head Chef: You no longer act as a typist but as an executive chef, orchestrating a "kitchen" of AI sous-chefs and line cooks who handle the implementation.
  • Results-Oriented Workflow: The process often prioritizes results over traditional deep understanding; a developer might simply "see stuff, say stuff, and run stuff" until the application works.
  • The FAAFO Framework: Vibe coding is driven by a desire for speed and experimentation — being Fast, Ambitious, Autonomous, having Fun, and maintaining Optionality.

However, there is a dark side to this speed. A recent investigation by RedAccess found that thousands of vibe-coded applications are currently exposing sensitive corporate and personal data on the open web. The problem? Developers are "accepting all" changes from AI without reading the diffs or verifying security.

The Danger: Results Over Understanding

The hallmark of vibe coding is often a focus on results over traditional understanding. As Andrej Karpathy admitted, when code grows beyond comprehension, he often "asks for random changes until bugs go away."

Vibe coding has been described as a "slot machine with infinite payout but also infinite loss potential."

Where is Security in the Vibe?

The recent RedAccess research serves as a critical warning: We cannot afford to view software development solely through the lens of technical speed or business value.

When developers focus only on the "vibe" (the desired outcome) and the "business value" (how fast it gets to market), security often becomes an invisible casualty. If we treat coding like a "slot machine" with infinite payout, we must acknowledge it also has infinite loss potential.

The Technical Blind Spot

From a purely technical perspective, an AI might generate code that works but is fundamentally insecure. AI models are trained on vast amounts of data, including insecure code, and may "suggest using ingredients that don't exist" or ignore security protocols unless explicitly instructed otherwise. If a developer "Accepts All" without reading the diffs, they are effectively blindfolding themselves to the vulnerabilities being introduced.

The Business Blind Spot

From a business perspective, the pressure to FAAFO (Fast, Ambitious, Autonomous, Fun, Optionality) can lead to "reckless abandon". The desire for "frictionless" creation can sever the link between creation and consequences. When anyone can generate an app in a weekend, the traditional Shift Left Security safeguards of professional engineering are often bypassed in favor of a quick "vibe."

When security is "vibed" instead of engineered:

  • Hallucinated Protections: AI may suggest security ingredients that don't exist or techniques that make no sense (see the PocketOS incident).
  • The "Swedish Chef" Effect: Without safeguards, a helpful AI can become a "menace to society," leaving a trail of unintentional destruction in the code base.
  • Exposure of Secrets: As RedAccess found, AI agents often inadvertently hardcode API keys, leave database ports open, or fail to implement proper authentication because they weren't explicitly told to do so.

The Solution: Implementing "Shift Left Security"

In traditional DevOps, we talk about "shifting left." In vibe coding, we must go further. Security must be an inherent part of the Prevent, Detect, and Correct loops that Kim and Yegge propose in their book Vibe Coding.

1. Prevent: Setting the "Golden Rules"

Your AI sous-chefs cannot read your mind; they need written rules.

  • AGENTS.md: Use a project-level file (like AGENTS.md) to document organizational security guidelines.
  • Zero-Trust Prompting: Explicitly instruct agents to never hardcode secrets and to always use environment variables.
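
The "never hardcode secrets" rule can also be enforced mechanically rather than left to the prompt alone. Below is a minimal sketch of a pre-commit-style scan; the regex patterns and the `find_secrets` helper are our own illustrative inventions, and a real scanner such as gitleaks or truffleHog covers far more cases:

```python
import re

# Illustrative patterns only -- a production scanner covers far more shapes.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[=:]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
]

def find_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    code = 'db_password = "hunter2-super-secret"\nport = int(os.environ["DB_PORT"])'
    for lineno, line in find_secrets(code):
        # flags line 1 only -- the env-var line on line 2 is the correct pattern
        print(f"line {lineno}: possible hardcoded secret: {line}")
```

Wired into a pre-commit hook, a check like this stops a secret before it ever reaches the repository, regardless of whether the agent followed its instructions.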

2. Detect: The Second Opinion

Vibe coding doesn't mean "turning your brain off."

  • Multi-Model Verification: Use a second AI model to review the first model's code. A second model can often catch a hallucinated API or a security vulnerability the first one missed.
  • Continuous Auditing: In the "Outer Loop" (weeks to months), you must audit the "kitchen" to ensure the AI hasn't "torched your bridges" by introducing legacy vulnerabilities over time.
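
As a toy illustration of the "second opinion" idea, the sketch below cross-checks a generated snippet against a Python module's real attribute list to flag hallucinated APIs. The `hallucinated_calls` helper is our own invention, and a static attribute check is of course far weaker than a genuine second-model review:

```python
import ast
import hashlib  # stands in for any module the generated code uses

def hallucinated_calls(source: str, module) -> list[str]:
    """Flag attributes accessed on `module` that don't actually exist.

    The module's real attribute list acts as the 'second opinion' here.
    """
    missing = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == module.__name__
                and not hasattr(module, node.attr)):
            missing.append(node.attr)
    return missing

generated = "digest = hashlib.sha256(data).hexdigest()\nmac = hashlib.quantum_hash(data)"
print(hallucinated_calls(generated, hashlib))  # ['quantum_hash']
```

`sha256` passes because it really exists; `quantum_hash` is flagged as a hallucination before the code ever runs.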

3. Correct: Automated Hardening

  • AI-Enabled CI/CD: Integrate AI into your delivery pipeline to analyze every change with the scrutiny of an expert human reviewer.
  • Small Changes: Foster a culture of making the smallest change possible to keep the codebase from "ballooning" into an unmanageable (and insecure) mess.
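
The "smallest change possible" rule can be backed by a simple, deterministic gate in CI. This sketch counts changed lines in a unified diff and rejects oversized changes; the function names and the 200-line threshold are arbitrary examples, not a prescribed standard:

```python
def changed_line_count(unified_diff: str) -> int:
    """Count added/removed lines in a unified diff (ignoring file headers)."""
    count = 0
    for line in unified_diff.splitlines():
        if line.startswith(("+++", "---")):
            continue  # file headers, not content changes
        if line.startswith(("+", "-")):
            count += 1
    return count

MAX_CHANGE = 200  # arbitrary example threshold -- tune per team

def gate(unified_diff: str) -> bool:
    """Return True if the change is small enough to merge."""
    return changed_line_count(unified_diff) <= MAX_CHANGE

diff = """--- a/app.py
+++ b/app.py
@@ -1,2 +1,3 @@
 import os
+import re
-x = 1
"""
print(changed_line_count(diff))  # 2
```

Large AI-generated diffs get bounced back to be split up, which keeps each change small enough for a human (or a second model) to actually review.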

Conclusion: From Vibe to Vision

Vibe coding is here to stay, and it is the "only game in town" for those who want to remain competitive. But as the RedAccess research proves, productivity without protection is a liability.

To build production-grade software, the "Head Chef" (the developer) must move from just "feeling the vibe" to orchestrating a secure environment. We must treat security not as a hurdle to FAAFO (Fast, Ambitious, Autonomous, Fun, Optionality), but as the foundation that makes it possible.

The ultimate challenge for a security expert in the era of "vibe coding" is a paradox: How do we enable 10x velocity without succumbing to "Vibe inSecurity" — the dangerous assumption that if the code "feels" right and passes a functional test, it must be secure?

To stay ahead, we must transform from "gatekeepers" to "guardrail engineers." We cannot afford to follow the same unstructured path as the developer; we must build the Security Infrastructure that surrounds their vibe. As security experts, our goal is to ensure that "Fast, Ambitious, Autonomous, and Fun" (FAAFO) does not lead to Fatal Flaws.

We must move beyond the "technical" (writing code) and the "business" (shipping features) to the Architectural. Success lies in moving from subjective manual review to objective automated governance with:

  1. The "Red Team" Inner Loop (Adversarial Prompting)
  2. Declarative Security Policy (The .security_rules File)
  3. Real-Time Behavioral Guardrails.
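
As a sketch of what a declarative security policy might look like, the snippet below loads a hypothetical `.security_rules` file (a made-up name-to-regex format, not an established standard) and reports which rules a piece of generated code violates:

```python
import re

# Hypothetical .security_rules content -- the format here is our own
# invention to illustrate the declarative-policy idea.
RULES_FILE = """\
no-eval: \\beval\\s*\\(
no-wildcard-cors: Access-Control-Allow-Origin:\\s*\\*
no-debug-true: DEBUG\\s*=\\s*True
"""

def load_rules(text: str) -> dict[str, re.Pattern]:
    """Parse 'name: regex' lines into compiled rules, skipping blanks/comments."""
    rules = {}
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        name, pattern = line.split(":", 1)
        rules[name.strip()] = re.compile(pattern.strip())
    return rules

def violations(source: str, rules: dict[str, re.Pattern]) -> list[str]:
    """Return the names of every rule the source code violates."""
    return [name for name, pat in rules.items() if pat.search(source)]

code = 'DEBUG = True\nresult = eval(user_input)'
print(violations(code, load_rules(RULES_FILE)))  # ['no-eval', 'no-debug-true']
```

Because the policy lives in a file rather than in someone's head, the same rules can be injected into agent prompts, enforced in CI, and audited over time.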

By implementing these strategies, we aren't just making the code secure; we are building an "Immune System" for the development process. We are allowing developers to "vibe" with the speed of light, knowing that the foundation they are building on is structurally incapable of certain classes of failure.

Vibe coding is the engine, but Security is the steering and the brakes. Without them, you aren't a developer — you're just a passenger in a high-speed crash.

Takeaways:

  • Verification is Mandatory: "Accept All" is a recipe for a data breach.
  • AI as a partner, not a pilot: You are the Executive Chef; you must taste the results early and often to ensure nothing unsafe leaves your kitchen.
  • Invest in rules: Spending time curating your prompt and rules files is the most important "housekeeping" a vibe coder can do. This includes:
      ◦ Explicit requirements: You only get the quality you ask for. If you don't explicitly require excellence in security through these files, the AI may "half-ass" the implementation to get it working faster.
      ◦ Preventive control: These files are your most powerful preventive control in the developer loop, stopping vulnerabilities before they ever enter your codebase.

Example of AGENTS.md: The following file is an AI-generated example. It acts as the "Standard Work" or recipe book for every agent that enters your kitchen. It should be placed in the root directory of your repository.

# Project Security & Quality Standards (AGENTS.md)
## Non-Negotiable Security Rules
* **No Hardcoded Secrets**: Never include API keys, passwords, or tokens in code.
* **Secrets Management**: Always use environment variables or a dedicated secrets manager.
* **Input Validation**: All data from external sources (API, UI, Database) must be sanitized and validated before use.
* **Least Privilege**: Configure all system calls and cloud permissions with the minimum necessary access.
* **Sandbox Compliance**: Do not attempt to access or modify files outside of the designated project directory or container.
## Verification Protocols
* **Test Before Commit**: Every new feature must include corresponding unit tests.
* **No "Cardboard Muffins"**: Never hardcode values to force a test to pass.
* **Diff Reviews**: AI must explain any deleted lines in a code diff to prevent silent destruction.
