Secure Prompting Patterns for AI Code Generation

Written by the Rafter Team

The difference between secure and vulnerable AI-generated code is usually the prompt. When you tell an AI assistant "build me a task manager," it optimizes for the fastest path to working features—hardcoded secrets, no input validation, missing auth, disabled RLS. When you tell it "build me a task manager with per-user RLS policies, Zod validation on all inputs, and authentication on every route," you get dramatically different output. A 2025 Apiiro study found that AI coding assistants ship 10x more vulnerabilities at 4x the velocity. The prompts are the leverage point. This guide provides copy-paste patterns that make AI assistants produce secure code by default.
Introduction
AI coding assistants—Cursor, GitHub Copilot, Claude, ChatGPT, Lovable, Bolt.new—are statistical machines. They generate the most probable next token based on context. When the context includes security requirements, security appears in the output. When it doesn't, it doesn't.
This isn't a limitation to work around. It's a lever to pull. By structuring your prompts to include security constraints upfront, you shift the AI's probability distribution toward secure patterns. The output isn't perfect, but it's measurably better than unconstrained generation.
This guide covers:
- Why default prompts produce insecure code
- System prompts that enforce security by default
- Task-level prompt patterns for common operations
- Review prompts that catch vulnerabilities in existing code
- How to combine prompting with automated scanning for defense in depth
Why Default Prompts Fail
AI assistants are trained on massive codebases. Those codebases contain the full spectrum of code quality—from production-hardened systems to tutorial snippets and prototype code. When your prompt doesn't specify security requirements, the AI draws from the entire distribution.
The result is statistically average code. And the statistical average of code on the internet is insecure.
What "Build Me a Task Manager" Produces
A typical unconstrained prompt generates:
- Supabase client with hardcoded keys (the fastest path to "it works")
- Tables without RLS (RLS requires understanding the auth model)
- API routes without authentication (auth isn't part of the functional requirement)
- No input validation (validation doesn't affect happy-path behavior)
- Raw error messages (fastest way to debug during development)
- No security headers (not visible to the user)
Each of these is a rational optimization if the only goal is working functionality. The AI is doing exactly what you asked—building a task manager. You didn't ask it to build a secure task manager.
Pattern 1: Security-First System Prompts
A system prompt runs before every interaction. Configure it once, and every code generation includes security constraints.
The Base Security System Prompt
Use this as your AI assistant's system prompt or custom instructions:
When generating code, always follow these security requirements:
AUTHENTICATION:
- Every API route and Server Action must check authentication before processing
- Use the established auth library (NextAuth, Supabase Auth, Clerk) for session validation
- Admin operations must verify admin role, not just authentication
- Return 401 for unauthenticated requests, 403 for unauthorized ones
DATABASE SECURITY:
- Enable Row-Level Security on every Supabase table
- Write RLS policies that scope data to the authenticated user (auth.uid())
- Never use the service_role key in client-side code
- Use parameterized queries, never string interpolation
INPUT VALIDATION:
- Validate all user inputs with Zod schemas before processing
- Validate route parameters and query strings
- Restrict file uploads by type and size
- Escape HTML output to prevent XSS
SECRETS MANAGEMENT:
- Never hardcode API keys, passwords, or tokens
- Use environment variables for all credentials
- Only use NEXT_PUBLIC_ prefix for values safe to expose in the browser
- Ensure .env files are in .gitignore
ERROR HANDLING:
- Return generic error messages to clients
- Log detailed errors server-side only
- Never expose stack traces, database schemas, or internal paths
RATE LIMITING:
- Add rate limiting to authentication endpoints
- Add rate limiting to expensive operations (AI calls, file uploads, email sends)
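To see what these constraints imply in practice, here is a minimal sketch of a handler that satisfies the AUTHENTICATION and ERROR HANDLING rules above. The `Session` shape and `getSession` lookup are hypothetical stand-ins for whatever your auth library (NextAuth, Supabase Auth, Clerk) actually provides:

```typescript
// Sketch only: framework-agnostic so the control flow is visible.
type Session = { userId: string; role: "user" | "admin" };

// Hypothetical stand-in for your auth library's session lookup.
function getSession(
  token: string | undefined,
  sessions: Map<string, Session>
): Session | null {
  if (!token) return null;
  return sessions.get(token) ?? null;
}

function deleteUserHandler(
  token: string | undefined,
  sessions: Map<string, Session>
): { status: number; body: { error?: string; ok?: boolean } } {
  const session = getSession(token, sessions);
  // 401 for unauthenticated requests...
  if (!session) return { status: 401, body: { error: "Unauthorized" } };
  // ...403 for authenticated requests without the required role.
  if (session.role !== "admin") return { status: 403, body: { error: "Forbidden" } };
  try {
    // ...perform the admin-only operation here...
    return { status: 200, body: { ok: true } };
  } catch (err) {
    // Detailed error stays server-side; client gets a generic message.
    console.error("deleteUser failed:", err);
    return { status: 500, body: { error: "Internal error" } };
  }
}
```

The 401/403 split matters: 401 says "prove who you are", 403 says "you are known but not allowed", and conflating them hides authorization bugs.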
Platform-Specific Additions
For Cursor, add to .cursorrules:
Security-first development. Every new file must follow the security
requirements in the system prompt. When modifying existing files, check
for and fix security gaps in the code you touch.
For Lovable or Bolt.new, prepend to every prompt:
Before generating code, confirm that you will:
1. Enable RLS on all new Supabase tables
2. Add auth checks to all new API endpoints
3. Use environment variables for all credentials
4. Validate all inputs with Zod
Pattern 2: Security-Constrained Task Prompts
When requesting specific features, include security constraints alongside functional requirements.
Database Schema Creation
Create a Supabase schema for a multi-tenant project management app.
Requirements:
- Tables: projects, tasks, team_members, comments
- Every table has RLS enabled
- RLS policies scope all operations to the user's organization
- Junction tables (team_members) also have RLS
- Use UUID primary keys with gen_random_uuid()
- Include created_at and updated_at timestamps
- Foreign keys reference auth.users(id) for user ownership
- Add indexes on foreign key columns
Do NOT:
- Create tables without RLS
- Use overly permissive policies (no USING (true))
- Create views without security_invoker = true
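Under a prompt like this, the generated migration should look roughly like the following sketch: a single user-owned `projects` table with RLS enabled and owner-scoped policies. The organization-membership policies needed for true multi-tenancy are omitted for brevity; this shows only the per-user scoping shape.

```sql
-- Sketch: user-owned table with RLS enabled and owner-scoped policies.
create table projects (
  id uuid primary key default gen_random_uuid(),
  owner_id uuid not null references auth.users(id),
  name text not null,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

alter table projects enable row level security;

-- Scope every operation to the owning user; never USING (true).
create policy "owners select" on projects
  for select using (auth.uid() = owner_id);
create policy "owners insert" on projects
  for insert with check (auth.uid() = owner_id);
create policy "owners update" on projects
  for update using (auth.uid() = owner_id);
create policy "owners delete" on projects
  for delete using (auth.uid() = owner_id);

-- Index the foreign key, per the requirements above.
create index on projects (owner_id);
```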
API Route Creation
Create a Next.js API route for updating a user's profile.
Requirements:
- Validate input with Zod: name (string, 1-100 chars), bio (string, max 500 chars)
- Check authentication with auth()
- Verify the authenticated user is updating their own profile
- Use parameterized database query
- Return generic error messages to the client
- Log detailed errors server-side
- Rate limit to 10 requests per minute per user
Do NOT:
- Skip the auth check
- Use string interpolation in the database query
- Return raw error messages or stack traces
- Expose other users' data
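The validation and ownership rules in this prompt reduce to logic like the sketch below. In real generated output the schema checks would be a Zod schema (`z.string().min(1).max(100)`); they are hand-rolled here so the rules are visible without dependencies:

```typescript
// Sketch of the prompt's validation rules: name 1-100 chars, bio optional, max 500.
type ProfileInput = { name: string; bio?: string };

function validateProfile(
  input: unknown
): { ok: true; data: ProfileInput } | { ok: false } {
  if (typeof input !== "object" || input === null) return { ok: false };
  const { name, bio } = input as Record<string, unknown>;
  if (typeof name !== "string" || name.length < 1 || name.length > 100) {
    return { ok: false };
  }
  if (bio !== undefined && (typeof bio !== "string" || bio.length > 500)) {
    return { ok: false };
  }
  return { ok: true, data: { name, bio: bio as string | undefined } };
}

// Ownership check: the authenticated user may only update their own profile.
function canUpdateProfile(sessionUserId: string, targetUserId: string): boolean {
  return sessionUserId === targetUserId;
}
```

The ownership check is the piece AI assistants most often omit: authentication proves who the caller is, but only this comparison prevents user A from updating user B's profile.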
Authentication Flow
Implement email/password authentication with Supabase Auth.
Requirements:
- Sign up with email verification
- Sign in with rate limiting (5 attempts per 15 minutes)
- Password reset flow that doesn't reveal whether an email exists
- Session management with automatic refresh
- Protected route middleware that redirects unauthenticated users
- Sign out that invalidates the session server-side
Do NOT:
- Store passwords in your own database (use Supabase Auth)
- Return different responses for "email not found" vs "wrong password"
- Allow unlimited login attempts
- Store session tokens in localStorage (use httpOnly cookies)
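The rate-limiting and account-enumeration requirements above can be sketched as a sliding-window counter plus a single failure message. The in-memory `Map` is for illustration only; production code would back this with Redis or your platform's rate limiter so it survives restarts and scales across instances:

```typescript
// Sliding window: 5 attempts per 15 minutes per identifier (email or IP).
const WINDOW_MS = 15 * 60 * 1000;
const MAX_ATTEMPTS = 5;
const attempts = new Map<string, number[]>(); // identifier -> attempt timestamps

function allowSignInAttempt(identifier: string, now: number): boolean {
  // Keep only attempts still inside the window.
  const recent = (attempts.get(identifier) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_ATTEMPTS) {
    attempts.set(identifier, recent);
    return false;
  }
  recent.push(now);
  attempts.set(identifier, recent);
  return true;
}

// One message for every failure mode: "email not found" and "wrong password"
// must be indistinguishable, or attackers can enumerate accounts.
const SIGN_IN_FAILED = "Invalid email or password";
```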
File Upload
Add file upload to the project settings page.
Requirements:
- Accept only PNG, JPG, and WebP images
- Maximum file size: 5MB
- Validate file type on both client and server (don't trust Content-Type header alone)
- Store in Supabase Storage with RLS policy scoped to the project owner
- Generate a unique filename to prevent path traversal
- Return a signed URL, not a public URL
Do NOT:
- Accept any file type
- Use the original filename for storage
- Skip server-side validation
- Make the storage bucket public
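The server-side checks in this prompt amount to sniffing the file's magic bytes rather than trusting the declared Content-Type, and deriving a storage name the client cannot influence. The sketch below covers the three allowed formats; `randomHex` stands in for a crypto-random identifier your code would generate:

```typescript
const MAX_BYTES = 5 * 1024 * 1024; // 5MB limit per the requirements

// Identify the format from magic bytes, not from the Content-Type header.
function sniffImageType(bytes: Uint8Array): "png" | "jpg" | "webp" | null {
  // PNG: 0x89 'P' 'N' 'G'
  if (bytes.length >= 8 && bytes[0] === 0x89 && bytes[1] === 0x50 && bytes[2] === 0x4e && bytes[3] === 0x47) return "png";
  // JPEG: 0xFF 0xD8 0xFF
  if (bytes.length >= 3 && bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) return "jpg";
  // WebP: 'R' 'I' 'F' 'F' .... 'W' 'E' 'B' 'P'
  if (bytes.length >= 12 && bytes[0] === 0x52 && bytes[1] === 0x49 && bytes[2] === 0x46 && bytes[3] === 0x46
      && bytes[8] === 0x57 && bytes[9] === 0x45 && bytes[10] === 0x42 && bytes[11] === 0x50) return "webp";
  return null;
}

function validateUpload(bytes: Uint8Array): { ok: boolean; type?: string } {
  if (bytes.length > MAX_BYTES) return { ok: false };
  const type = sniffImageType(bytes);
  return type ? { ok: true, type } : { ok: false };
}

// Never reuse the client's filename: a random name with a vetted extension
// leaves no user-controlled path segments, so traversal is impossible.
function storageName(type: "png" | "jpg" | "webp", randomHex: string): string {
  return `${randomHex}.${type}`;
}
```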
Pattern 3: Adversarial Review Prompts
After generating code, use the AI to audit its own output. A second, adversarial pass catches a significant portion of the vulnerabilities that slipped through the initial generation.
The Security Review Prompt
Review the code you just generated for security vulnerabilities.
Check for and fix:
1. AUTHENTICATION GAPS
- API routes or Server Actions missing auth checks
- Admin operations that don't verify admin role
- Data access without ownership verification
2. INJECTION VULNERABILITIES
- String interpolation in database queries
- Unsanitized HTML output (XSS)
- User input in file paths (path traversal)
- User input in shell commands (command injection)
3. DATA EXPOSURE
- Hardcoded secrets or API keys
- Service role keys in client code
- Overly permissive database queries (SELECT *)
- Error messages leaking internal details
4. MISSING CONTROLS
- Tables without RLS policies
- Endpoints without rate limiting
- Missing CORS configuration
- Missing security headers
For each issue found, show the vulnerable code and the fix.
The "Attack This Code" Prompt
You are a security researcher performing a penetration test.
Examine the code above and identify:
1. How would you access another user's data?
2. How would you bypass authentication?
3. How would you inject malicious input?
4. How would you exfiltrate secrets?
5. How would you cause a denial of service?
For each attack, show the exact request or input you would use,
then provide the code fix that prevents it.
The Dependency Audit Prompt
Review the package.json and identify:
1. Packages with known security vulnerabilities
2. Packages that are unmaintained (no updates in 12+ months)
3. Packages that are unnecessarily included (unused imports)
4. Packages that could be replaced with built-in Node.js or browser APIs
For each finding, explain the risk and suggest a replacement or removal.
Pattern 4: Incremental Security Hardening
For existing vibe-coded projects, use these prompts to harden code incrementally without rewriting everything.
RLS Audit and Fix
Audit the Supabase schema in this project.
For every table in the public schema:
1. Check if RLS is enabled
2. Review existing policies for overly permissive rules
3. Identify missing policies for SELECT, INSERT, UPDATE, DELETE
4. Check for junction tables and views that bypass RLS
Output:
- A SQL migration that enables RLS on all tables
- Policies scoped to auth.uid() for user-owned data
- Policies scoped to organization membership for shared data
- A test script that verifies policies work correctly
Auth Middleware Retrofit
This project has API routes and Server Actions without authentication.
1. List every API route and Server Action in the project
2. Identify which ones are missing auth checks
3. Add authentication middleware using the existing auth library
4. Add authorization checks where data ownership matters
5. Keep public routes (marketing pages, public API) unchanged
Show me a complete diff for each file changed.
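One low-risk shape for step 3 is a higher-order wrapper that existing handlers are passed through, rather than editing each handler body. The sketch below is framework-agnostic; `getUser` is a hypothetical stand-in for the project's session lookup, and public routes simply skip the wrapper:

```typescript
type User = { id: string };
type Handler<T> = (user: User, input: T) => { status: number; body: unknown };

// Wrap an existing handler so it only runs for authenticated callers.
function requireAuth<T>(
  getUser: (token: string | undefined) => User | null,
  handler: Handler<T>
): (token: string | undefined, input: T) => { status: number; body: unknown } {
  return (token, input) => {
    const user = getUser(token);
    if (!user) return { status: 401, body: { error: "Unauthorized" } };
    return handler(user, input);
  };
}
```

Because the auth check lives in one place, the retrofit diff for each route is a single wrapping call, which is easy to review and easy to revert.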
Pattern 5: Composing Patterns for Full Coverage
The patterns above work best in combination. Here's the recommended workflow:
Step 1: Set the System Prompt
Configure Pattern 1 as your AI assistant's persistent instructions. This establishes the security baseline for all code generation.
Step 2: Use Constrained Task Prompts
When requesting specific features, use Pattern 2 to include security requirements alongside functional requirements.
Step 3: Review with Adversarial Prompts
After generating code, use Pattern 3 to audit the output. Run both the security review and the "attack this code" prompts.
Step 4: Harden Incrementally
For existing code, use Pattern 4 to retrofit security controls without full rewrites.
Step 5: Scan with Rafter
Prompting improves code quality but doesn't guarantee security. Rafter catches the vulnerabilities that survive all four prompt patterns—because AI assistants, even well-prompted ones, have blind spots.
What Prompting Can and Can't Do
Secure prompting measurably reduces vulnerabilities in AI-generated code. But it has limits.
What prompting does well:
- Ensures auth checks are included in generated code
- Produces RLS policies alongside table definitions
- Generates input validation schemas
- Keeps secrets out of source files
What prompting can't catch:
- Logic errors in authorization (policy says "admin" but the role is stored differently)
- Dependency vulnerabilities (the AI doesn't know about CVEs published after training)
- Framework-level bugs (CVE-2025-29927 existed regardless of prompt quality)
- Subtle injection patterns in complex query builders
This is why prompting is one layer in a defense-in-depth strategy, not a replacement for automated scanning and manual review.
Defense in depth for vibe-coded apps: secure prompts produce better code → adversarial review catches generation mistakes → Rafter scanning catches what review misses → continuous monitoring catches runtime issues.
Conclusion
AI coding assistants generate the code their prompts describe. When the prompts include security requirements, the output includes security controls. When they don't, it doesn't.
Your next steps:
- Copy the system prompt from Pattern 1 into your AI assistant's custom instructions—it takes one minute and improves every future generation
- Use constrained task prompts (Pattern 2) for your next feature—include security requirements alongside functional requirements
- Run the adversarial review (Pattern 3) on your most recent AI-generated code—you'll find vulnerabilities
- Scan with Rafter to catch what prompting misses—AI assistants have blind spots that automated scanning fills
The prompts you write are the first line of defense. Make them count.
Related Resources
- Vibe Coding Security: The Complete Guide
- Supabase RLS for Vibe-Coded Apps: The Security You're Missing
- Next.js Security Checklist for AI-Generated Projects
- Vibe Coding Is Great — Until It Isn't: Why Security Matters
- How to Thoroughly Test Your Vibe-Coded App
- Why You Need Independent Security Audits for Vibe-Coded Apps
- Securing AI-Generated Code: Best Practices