Vibe Coding Security: The Complete Guide to Securing AI-Generated Apps

Written by the Rafter Team

Vibe-coded applications—apps built primarily through AI coding assistants like Cursor, Lovable, Bolt.new, v0, and Replit—ship faster than anything before them. They also ship more vulnerabilities. Research from Veracode found that 45% of AI-generated code contains security flaws, while CodeRabbit's analysis of 470 pull requests showed AI-written code produces 1.7x more issues than human-written code, including 2.74x more XSS vulnerabilities. The speed is real. So are the risks. This guide covers the full attack surface of vibe-coded apps and gives you concrete steps to secure them.
By June 2025, AI-generated code was introducing over 10,000 new security findings per month—a 10x spike in just six months. If you're shipping vibe-coded apps without security scanning, you're shipping vulnerabilities.
Introduction
Vibe coding has collapsed the distance between idea and deployed application. A founder can describe a SaaS product in natural language and have a working prototype in hours. But the same tools that eliminate boilerplate also eliminate the friction that used to force developers to think about security.
This guide covers everything you need to secure a vibe-coded application:
- The attack surface unique to AI-generated code
- Platform-specific security gaps across Lovable, Bolt.new, v0, Replit, and others
- Database security with Supabase RLS
- Framework hardening for Next.js and React
- Prompt engineering patterns that produce secure code
- Testing strategies built for AI-generated codebases
- Automated scanning with Rafter
Whether you're a solo founder shipping your first product or a developer evaluating AI-generated code for production, this is your security reference.
The Attack Surface: What Makes Vibe-Coded Apps Vulnerable
AI coding assistants optimize for functionality, not security. They produce code that works on the happy path but fails under adversarial conditions. The vulnerabilities fall into predictable categories.
Hardcoded Secrets
AI assistants routinely generate code with API keys, database passwords, and service tokens embedded directly in source files. The assistant's goal is working code—and hardcoding a key is the fastest path to "it works." According to GitGuardian's 2024 State of Secrets Sprawl report, over 10 million secrets were detected on public GitHub repositories in 2023 alone.
```js
// ✗ Vulnerable: AI-generated code often does this
const supabase = createClient(
  'https://abc123.supabase.co',
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
);

// ✓ Secure: Use environment variables
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY
);
```
Missing Authentication and Authorization
AI-generated API routes frequently lack authentication middleware. The assistant builds the endpoint, connects it to the database, and returns data—without checking who's asking. Admin panels, user data endpoints, and mutation routes ship wide open.
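The fix is a guard that runs before any data access. Here is a minimal, dependency-free sketch of the pattern, assuming a Next.js-style route handler; `getSession` is a hypothetical stand-in for your auth provider's session lookup (NextAuth, Supabase Auth, etc.):

```typescript
// Session shape is illustrative; match it to your auth provider.
type Session = { userId: string; role: 'user' | 'admin' } | null;

// Hypothetical session lookup -- replace with your auth library's real call.
async function getSession(token: string | undefined): Promise<Session> {
  if (token === 'valid-user-token') return { userId: 'u1', role: 'user' };
  return null;
}

// Every handler runs through the guard before touching data.
async function handleGetTasks(authToken: string | undefined) {
  const session = await getSession(authToken);
  if (!session) {
    // Reject before any query runs.
    return { status: 401, body: { error: 'Unauthorized' } };
  }
  // Scope the query to the caller -- never return other users' rows.
  return { status: 200, body: { tasks: [], ownerId: session.userId } };
}
```

The important property is ordering: authentication happens first, and the database query is scoped to the authenticated user rather than taking a user ID from the request.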
No Input Validation
AI assistants skip validation because it's not part of the prompt. User inputs flow directly into database queries, file operations, and API calls without sanitization. This creates injection vulnerabilities—SQL injection, XSS, command injection—that are trivially exploitable.
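Both halves of the defense can be sketched without dependencies; in practice a schema library like Zod replaces the hand-rolled validator, but the shape is the same. The field names and limits below are illustrative:

```typescript
// Validate before the input touches a query, file path, or API call.
function validateTaskTitle(input: unknown): { ok: boolean; value?: string; error?: string } {
  if (typeof input !== 'string') return { ok: false, error: 'title must be a string' };
  const trimmed = input.trim();
  if (trimmed.length === 0 || trimmed.length > 200) {
    return { ok: false, error: 'title must be 1-200 characters' };
  }
  return { ok: true, value: trimmed };
}

// Escape user-supplied text before interpolating it into HTML,
// so payloads like <script> render as text instead of executing.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Validation rejects malformed input at the boundary; escaping neutralizes whatever makes it through to output. You need both, because they protect against different failure modes.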
Insecure Dependencies
When an AI assistant needs functionality, it reaches for the first package that solves the problem. It doesn't check maintenance status, known CVEs, or download trends. Synopsys OSSRA 2025 found that 84% of open-source projects contain vulnerable dependencies.
Missing Row-Level Security
Supabase is the default backend for most vibe-coded apps. Tables are created with RLS disabled by default. AI assistants rarely enable it because RLS policies require understanding the authorization model—something a prompt like "build me a task manager" doesn't specify. A 2025 study by byteiota found 170+ Lovable-built apps with exposed databases due to missing RLS.
No Rate Limiting
AI-generated APIs ship without rate limiting. Every endpoint becomes a potential denial-of-service target or abuse vector. Attackers can brute-force authentication, scrape data, or burn through your API credits.
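A minimal in-memory fixed-window limiter illustrates the mechanism. This sketch only works for a single server process; multi-instance deployments need a shared store such as Redis, or a hosted service:

```typescript
// Fixed-window rate limiter: at most `limit` requests per key per window.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // `now` is injectable for testing; defaults to the real clock.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window: reset the counter.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Call `allow()` once per request, keyed by IP address or user ID, and respond with HTTP 429 when it returns false. Authentication endpoints deserve the tightest limits, since they are the brute-force target.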
Platform Security Comparison
Not all vibe coding platforms expose the same risks. The table below compares security characteristics across major platforms based on Rafter's independent security audits.
| Security Feature | Lovable | Bolt.new | v0 | Replit | Base44 | Emergent |
|---|---|---|---|---|---|---|
| Default RLS on new tables | No | N/A | N/A | N/A | No | No |
| Auth scaffolding included | Partial | No | No | Partial | No | No |
| Secret management built-in | Env vars | Env vars | Env vars | Secrets tab | Limited | Limited |
| Dependency scanning | No | No | No | Basic | No | No |
| HTTPS by default | Yes | Yes | Yes | Yes | Yes | Yes |
| Input validation scaffolding | No | No | No | No | No | No |
| Security warnings in UI | Minimal | No | No | No | No | No |
Every platform above defaults to speed over security. None of them will warn you when your AI-generated code ships without authentication, input validation, or RLS policies. That's why independent scanning matters.
The detailed findings for each platform are available in our audit series:
- Security Audit: Lovable
- Security Audit: Bolt.new
- Security Audit: v0
- Security Audit: Replit
- Security Audit: Base44
- Security Audit: Emergent
The Vibe Coding Security Checklist
Use this checklist before shipping any vibe-coded application. Each item addresses a vulnerability category that AI assistants consistently miss.
Secrets and Credentials
- No API keys, tokens, or passwords in source code
- All secrets stored in environment variables or a secrets manager
- .env files listed in .gitignore
- Service role keys never exposed to the client
- Git history checked for accidentally committed secrets
Authentication and Authorization
- Every API route requires authentication
- Admin routes check for admin role, not just authentication
- JWT tokens validated on every request
- Session expiration configured
- Password reset flow doesn't leak user existence
Input Validation
- All user inputs validated with a schema library (Zod, Joi, or similar)
- Database queries use parameterized statements, never string interpolation
- File uploads restricted by type and size
- HTML output escaped to prevent XSS
- URL parameters validated before use
Database Security
- RLS enabled on every Supabase table
- RLS policies tested with different user roles
- Service role key used only in server-side code
- No SELECT * queries exposing unnecessary columns
- Database migrations reviewed for security implications
Dependencies
- npm audit or equivalent run before deployment
- No packages with known critical CVEs
- Unused dependencies removed
- Lock file committed to version control
Rate Limiting and Abuse Prevention
- Rate limiting on authentication endpoints
- Rate limiting on API endpoints
- CORS configured to allow only your domain
- CSRF protection enabled
Monitoring
- Error tracking configured (Sentry or similar)
- Authentication failures logged
- Unusual API usage patterns monitored
Database Security: Why RLS Is Non-Negotiable
Supabase powers the majority of vibe-coded applications, and Row-Level Security is the single most important security control you can implement. Without RLS, any authenticated user can read, modify, or delete any row in any table—your entire database is one API call away from exposure.
The Problem
When you create a table in Supabase, RLS is disabled by default. The AI assistant creates your schema, inserts sample data, and moves on. Your anon key—which is embedded in your client-side code—now has unrestricted access to every row.
```sql
-- ✗ Vulnerable: Table without RLS
CREATE TABLE todos (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  user_id UUID REFERENCES auth.users(id),
  title TEXT,
  completed BOOLEAN DEFAULT false
);
-- Anyone with the anon key can read/write ALL todos

-- ✓ Secure: Enable RLS and add policies
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can view their own todos"
  ON todos FOR SELECT
  USING (auth.uid() = user_id);

CREATE POLICY "Users can insert their own todos"
  ON todos FOR INSERT
  WITH CHECK (auth.uid() = user_id);

CREATE POLICY "Users can update their own todos"
  ON todos FOR UPDATE
  USING (auth.uid() = user_id);

CREATE POLICY "Users can delete their own todos"
  ON todos FOR DELETE
  USING (auth.uid() = user_id);
```
Common RLS Mistakes in Vibe-Coded Apps
Using the service role key on the client. The service_role key bypasses all RLS policies. AI assistants sometimes use it in client-side code because it "just works" without needing policies. This gives every user god-mode access to your database.
Overly permissive SELECT policies. A policy like USING (true) on SELECT makes every row readable by every user. AI assistants generate this when they can't figure out the correct authorization logic.
Forgetting RLS on junction tables. The AI creates RLS on your main tables but forgets the junction tables used for many-to-many relationships. Attackers enumerate relationships through the unprotected junction table.
Views bypassing RLS. By default, Postgres views run with their owner's privileges (security definer semantics), which bypasses RLS on the underlying tables. If your AI assistant creates views, verify they use security_invoker.
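On Postgres 15 and later (which current Supabase projects run on), an existing view can be switched to respect the querying user's RLS policies; the view name below is a placeholder:

```sql
-- Make the view run with the caller's permissions, not the owner's.
ALTER VIEW my_view SET (security_invoker = on);
```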
For a deep dive into RLS implementation patterns, see Supabase RLS for Vibe-Coded Apps: The Security You're Missing.
Framework Security: Next.js and React Hardening
Next.js is the default framework for most vibe coding platforms. It's powerful, but AI-generated Next.js code routinely ships with critical security gaps.
The Middleware Bypass (CVE-2025-29927)
In March 2025, Vercel disclosed CVE-2025-29927—a critical vulnerability (CVSS 9.1) that allowed attackers to bypass all Next.js middleware by adding a single HTTP header: x-middleware-subrequest. This affected versions 11.1.4 through 15.2.2—years of deployed applications.
If your vibe-coded app uses middleware for authentication, authorization, or CSP headers, and you haven't updated Next.js, your security controls are bypassed with one header.
The React Server Components RCE (CVE-2025-55182)
In December 2025, a critical RCE vulnerability (CVSS 10.0) was found in React Server Components and Next.js. An unauthenticated attacker could execute arbitrary code on your server with a crafted HTTP request.
What AI Assistants Get Wrong in Next.js
Server Actions without authorization. AI assistants create Server Actions that mutate data without checking who's calling them. Every Server Action is a public API endpoint—treat it like one.
```ts
// ✗ Vulnerable: No auth check in Server Action
'use server'

export async function deleteUser(userId: string) {
  await db.delete(users).where(eq(users.id, userId));
}
```

```ts
// ✓ Secure: Verify authorization
'use server'

export async function deleteUser(userId: string) {
  const session = await auth();
  if (!session?.user || session.user.role !== 'admin') {
    throw new Error('Unauthorized');
  }
  await db.delete(users).where(eq(users.id, userId));
}
```
Client-side secrets in Server Components. AI assistants sometimes import environment variables without the NEXT_PUBLIC_ prefix into client components, or vice versa—exposing server-only secrets to the browser.
Missing security headers. AI-generated Next.js apps rarely include security headers. Add them in next.config.js or middleware.
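A baseline headers configuration looks like the sketch below, assuming a next.config.ts (supported in recent Next.js versions). The header set is a common starting point, and the CSP value is a deliberately strict placeholder you will need to loosen for the scripts and styles your app actually loads:

```typescript
// Baseline security headers applied to every route.
const securityHeaders = [
  { key: 'X-Frame-Options', value: 'DENY' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  { key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains' },
  { key: 'Content-Security-Policy', value: "default-src 'self'" },
];

const nextConfig = {
  async headers() {
    // '/(.*)' matches every path, so no route ships without the headers.
    return [{ source: '/(.*)', headers: securityHeaders }];
  },
};

export default nextConfig;
```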
For the full framework security checklist, see Next.js Security Checklist for AI-Generated Projects.
Secure Prompting: Getting AI to Write Secure Code
The quality of AI-generated code depends heavily on how you prompt the assistant. Generic prompts produce generic (insecure) code. Security-aware prompts produce dramatically better output.
The Problem with Default Prompts
When you tell an AI assistant "build me a task manager with Supabase," it optimizes for the fastest path to a working feature. Security isn't part of the objective function. The result: working code with hardcoded keys, no RLS, no input validation, and no auth checks.
Prompting Patterns That Produce Secure Code
Pattern 1: Security requirements upfront. Include security requirements in your initial prompt, not as an afterthought.
```
Build a task manager with Supabase. Requirements:
- Enable RLS on all tables with per-user policies
- Validate all inputs with Zod schemas
- Use environment variables for all credentials
- Add authentication middleware to all API routes
- Implement rate limiting on mutation endpoints
```
Pattern 2: Threat-aware iteration. After the AI generates code, prompt it to attack its own output.
```
Review the code you just generated. Identify:
1. Any hardcoded secrets or credentials
2. API routes missing authentication
3. Database queries vulnerable to injection
4. Missing input validation
5. Tables without RLS policies
Fix all issues found.
```
Pattern 3: Security-first system prompts. Configure your AI assistant's system prompt to enforce security by default.
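An illustrative system prompt along these lines (the wording is an example, not a template from any particular tool):

```
You are a security-conscious engineer. In all generated code:
- Never hardcode secrets; read them from environment variables
- Require authentication on every API route and Server Action
- Validate all external input before using it
- Enable RLS with per-user policies on every new table
- Use parameterized queries, never string interpolation
```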
For complete prompt templates and patterns, see Secure Prompting Patterns for AI Code Generation.
Testing and Auditing AI-Generated Code
Functional testing—verifying that features work—doesn't catch security vulnerabilities. A login form can work perfectly while being vulnerable to SQL injection. Vibe-coded apps need a testing strategy that explicitly targets the vulnerability patterns AI assistants introduce.
The Four-Layer Testing Approach
Layer 1: Automated security scanning (5 minutes). Run Rafter against your repository. It catches hardcoded secrets, injection vulnerabilities, missing authentication, insecure dependencies, and RLS gaps. This is the highest-ROI security activity you can do.
Layer 2: Manual auth testing (30 minutes). Try accessing every API route without authentication. Try accessing other users' data with a valid session. Try calling admin endpoints with a regular user token.
Layer 3: Input fuzzing (1 hour). Submit unexpected inputs to every form and API endpoint. Empty strings, SQL injection payloads, XSS payloads, extremely long strings, special characters, and negative numbers.
Layer 4: Dependency audit (10 minutes). Run npm audit and review the results. Update or replace any packages with known critical vulnerabilities.
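The Layer 2 checks can be partly scripted: request each protected route with no credentials and flag anything that answers with something other than 401 or 403. The sketch below takes an injectable fetch-like function so it can run against a mock in tests or the real `fetch` against staging; the route list is hypothetical:

```typescript
// Minimal shape of fetch that the probe needs.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> }
) => Promise<{ status: number }>;

// Hit each route with no auth header and collect the ones that let us in.
async function findUnprotectedRoutes(
  baseUrl: string,
  routes: string[],
  fetchFn: FetchLike
): Promise<string[]> {
  const exposed: string[] = [];
  for (const route of routes) {
    const res = await fetchFn(`${baseUrl}${route}`); // deliberately no credentials
    if (res.status !== 401 && res.status !== 403) {
      exposed.push(`${route} answered ${res.status} without credentials`);
    }
  }
  return exposed;
}
```

A real run might look like `findUnprotectedRoutes('https://staging.example.com', ['/api/users', '/api/admin'], fetch)` against a staging deployment, never production.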
What to Test First
Prioritize testing based on impact:
- Authentication and authorization—can users access data they shouldn't?
- Data mutations—can users modify or delete data they don't own?
- Secrets exposure—are any credentials visible in client-side code or network requests?
- Input handling—do forms and APIs reject malicious input?
For a comprehensive testing guide, see How to Thoroughly Test Your Vibe-Coded App.
How Rafter Secures Vibe-Coded Apps
Rafter is purpose-built for the security challenges of AI-generated code. It understands the patterns AI assistants produce and the vulnerabilities they introduce.
What Rafter Scans For
- Hardcoded secrets: API keys, database passwords, service tokens, and JWTs embedded in source code
- Missing authentication: API routes and Server Actions without auth middleware
- Injection vulnerabilities: SQL injection, XSS, command injection, and path traversal
- Insecure dependencies: Packages with known CVEs and unmaintained libraries
- RLS gaps: Supabase tables without Row-Level Security policies
- Overly permissive configurations: CORS wildcards, disabled CSRF protection, and open admin routes
How It Works
- Connect your GitHub repository
- Rafter scans your codebase in seconds
- Review findings organized by severity—Critical, Warning, Improvement
- Copy AI-ready fix prompts directly into your coding assistant
Every finding includes the exact file and line number, an explanation of the risk, and a prompt you can paste into your AI assistant to fix the issue.
Continuous Protection
Integrate Rafter into your CI/CD pipeline to scan every commit automatically. Block merges that introduce critical vulnerabilities. Get notified when new CVEs affect your dependencies.
Run a free scan on your vibe-coded app →
Conclusion
Vibe coding isn't going away—it's accelerating. The tools will keep getting better, the outputs will keep getting more sophisticated, and more production applications will be built this way. The security gap between what AI assistants generate and what production requires won't close on its own.
The data is clear: 45% of AI-generated code contains vulnerabilities, AI-written code produces 1.7x more security issues than human-written code, and 81% of companies knowingly ship vulnerable code under AI-accelerated workflows. These aren't edge cases. They're the baseline.
Your action plan:
- Run a security scan now—connect your repo to Rafter and fix critical findings before they're exploited
- Enable RLS on every Supabase table—this single change eliminates the most common vibe-coded vulnerability
- Update Next.js—if you're running versions before 15.2.3, you're exposed to the middleware bypass (CVE-2025-29927)
- Add security requirements to your prompts—tell the AI what you need before it generates code, not after
- Audit dependencies weekly—run npm audit and act on critical findings immediately
The goal isn't to slow down. It's to ship with confidence instead of hope.
Related Resources
- Vibe Coding Is Great — Until It Isn't: Why Security Matters
- Supabase RLS for Vibe-Coded Apps: The Security You're Missing
- Next.js Security Checklist for AI-Generated Projects
- Secure Prompting Patterns for AI Code Generation
- How to Thoroughly Test Your Vibe-Coded App
- Why You Need Independent Security Audits for Vibe-Coded Apps
- Security Audit: Lovable
- Automated Security Scanning for Modern Applications