Ted Nyman

Fundamentals of web security for vibe coding

If you're going to vibe code, you need to understand basic web security

Vibe coding?

Vibe coding is a strange, new, important way of building software in which developers use AI coding assistants like Cursor, Windsurf, and GitHub Copilot to generate code rapidly, often without scrutinizing (or even looking at) every line. Given the speed of development this enables, this way of working (in some form or another; it will certainly get more sophisticated) is likely here to stay.

I’m bullish on a full-on embrace of AI-assisted coding, but I’m concerned about the security implications when people start shipping to production without understanding even the basics of web security.

AI coding lets you think less about code—it doesn’t mean you avoid the one absolute obligation you have to your users: security.

If you’re an experienced developer now using AI, you probably already know this stuff, so you’ll mostly want to make sure your AI assistant is following best practices. Enjoy the productivity boost and flourish!

But if you’re newer to web development, and plenty are, you need to understand the basics of web security.

If your app stores any data, you cannot ship to real users without knowing this stuff. I’m serious. It’s not optional. You’re going to seriously mess up.

The good news is the hard parts of web security are mostly solved by:

  1. Using established libraries and frameworks
  2. Following a fairly small number of best practices

AI assistants will usually write code that is secure, but they sometimes need guidance to do so. If you’re subsequently changing code yourself, and you don’t know what you’re doing: well, stop.

At the very least, stop shipping to production. Simply put, you’re not ready for real people to use your app. Not yet, anyway.

You don’t need to be a security expert to build apps (virtually no developers are), but you do need to know what to ask for and what to look for. I’m going to assume no prior knowledge of web security here, so experienced developers will find the examples familiar. Some of the conversation examples may seem silly, but they’re there to help you understand the basics.

1. Never Store Sensitive Data in Client-Side Code

This is the number one problem I’m seeing in vibe-coded apps, and if it keeps happening, things are going to get bad. It’s happening with everything from OpenAI API keys (relatively tame) to full-on database credentials (nightmarish for your users and for you).

Why It Matters

Frontend code is accessible to anyone who visits your app. All it takes is opening the browser’s developer tools, and your secrets are exposed. This includes API keys, access tokens, and other credentials that could give attackers access to your systems and your users’ data.

How to Handle It Right

Tell your AI assistant to:

  1. Keep all API keys, credentials, and other secrets on the server, loaded from environment variables
  2. Create server endpoints that proxy calls to third-party APIs, so keys never appear in frontend code
  3. Double-check that nothing sensitive ends up in the client-side bundle

Conversation with Your AI Assistant:

YOU: "I need to authenticate with the payment API"
AI: "I'll set that up for you."
YOU: "Ensure the API key isn't stored in the frontend and that no secrets are exposed in frontend code."
AI: "Absolutely. I'll create a server endpoint to handle the payment API calls securely, keeping the keys on the server side only."

Bad Example (Next.js)

// 🚫 DON'T DO THIS in your app
// pages/index.js
import { useEffect, useState } from 'react';

// This runs at build time and embeds these values in the client-side bundle
export async function getStaticProps() {
  return {
    props: {
      // DANGER: These will be visible in the client-side JavaScript bundle
      apiKeys: {
        stripe: process.env.STRIPE_SECRET_KEY,
        aws: process.env.AWS_SECRET_ACCESS_KEY,
        sendgrid: process.env.SENDGRID_API_KEY
      },
      dbConfig: {
        host: process.env.DB_HOST,
        password: process.env.DB_PASSWORD
      }
    }
  };
}

export default function Home({ apiKeys, dbConfig }) {
  const [paymentStatus, setPaymentStatus] = useState('');
  
  // These sensitive values are now in the client-side bundle
  // Easily visible in the browser's dev tools/network tab
  console.log("API Keys available:", apiKeys);
  console.log("DB Config:", dbConfig);
  
  async function processPayment() {
    // Using the leaked API key in client-side code
    const response = await fetch('https://api.stripe.com/v1/charges', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKeys.stripe}`, // DANGER!
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        amount: 2000,
        currency: 'usd',
        source: 'tok_visa'
      })
    });
    
    setPaymentStatus('Payment processed!');
  }

  // The component still needs to render something; the secrets above are
  // already in every visitor's browser regardless
  return <button onClick={processPayment}>Pay now {paymentStatus}</button>;
}

If any of your code looks like this, stop immediately.
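
For contrast, here’s what the right approach looks like. It’s sketched in Flask since most of the examples below use it, but the same idea applies to a Next.js API route: the key lives in an environment variable on the server, and the browser only ever calls your own endpoint. The /api/pay route and the charge parameters here are illustrative, not a complete payment integration.

Good Example (Python with Flask)

# ✅ A sketch of the server-side approach: the key never leaves the server
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Loaded from the server's environment; never shipped to the browser
STRIPE_SECRET_KEY = os.environ.get('STRIPE_SECRET_KEY')

@app.route('/api/pay', methods=['POST'])
def pay():
    # The browser sends only order details, never any credentials
    amount = request.json.get('amount')

    response = requests.post(
        'https://api.stripe.com/v1/charges',
        headers={'Authorization': f'Bearer {STRIPE_SECRET_KEY}'},
        data={'amount': amount, 'currency': 'usd', 'source': 'tok_visa'},
    )
    return jsonify(status='ok' if response.ok else 'error')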

2. Implement Proper Authentication and Session Management

Why It Matters

Poor authentication can lead to account takeovers, identity theft, and unauthorized access to user data. When you’re vibe coding, you might miss crucial security steps.

How to Handle It Right

USE ESTABLISHED LIBRARIES! This is absolutely critical. Tell your AI to:

  1. Use a well-maintained authentication library (like Flask-Login) instead of rolling your own
  2. Hash passwords with an established algorithm such as bcrypt; never store them in plaintext
  3. Configure session cookies with the Secure, HttpOnly, and SameSite flags
  4. Keep the session secret key in an environment variable, not in your code

Does this seem like a lot? It is. Even experienced programmers mess this up. That’s why you let the library do it. Make sure your AI assistant knows this.

Conversation with Your AI Assistant:

YOU: "Let's add user login to our app. Use established libraries."
AI: "I'll set that up. Would you like me to use Flask-Login for this?"
YOU: "Yes, and make sure we're using secure sessions and proper password hashing and anything else needed to be secure."
AI: "Absolutely. I'll implement Flask-Login with proper session security, use bcrypt for password hashing, and ensure all cookies have the HttpOnly flag set."

Good Example (Python with Flask)

# ✅ Example using established libraries
import os
from flask import Flask
from flask_login import LoginManager, UserMixin
from werkzeug.security import generate_password_hash, check_password_hash

app = Flask(__name__)
app.secret_key = os.environ.get('SECRET_KEY')
app.config.update(
    SESSION_COOKIE_SECURE=True,
    SESSION_COOKIE_HTTPONLY=True,
    SESSION_COOKIE_SAMESITE='Lax',
)

login_manager = LoginManager()
login_manager.init_app(app)
# ... rest of authentication setup
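
The conversation above mentions bcrypt; werkzeug’s generate_password_hash uses its own secure default, and you can swap in a dedicated bcrypt library if you prefer. To make the elided setup concrete, here is a hedged sketch of what it typically includes: a user loader plus registration and login routes that store only password hashes. The in-memory users dict and the route names are illustrative stand-ins for your real database and app structure.

# A sketch continuing the snippet above (the dict stands in for a real database)
from flask import request
from flask_login import login_user

users = {}  # username -> User

class User(UserMixin):
    def __init__(self, username, password_hash):
        self.id = username
        self.password_hash = password_hash

@login_manager.user_loader
def load_user(user_id):
    return users.get(user_id)

@app.route('/register', methods=['POST'])
def register():
    username = request.form['username']
    # Store only the hash, never the plaintext password
    users[username] = User(username, generate_password_hash(request.form['password']))
    return {'status': 'registered'}

@app.route('/login', methods=['POST'])
def login():
    user = users.get(request.form['username'])
    if user and check_password_hash(user.password_hash, request.form['password']):
        login_user(user)  # Flask-Login issues the session cookie per the config above
        return {'status': 'logged in'}
    return {'status': 'invalid credentials'}, 401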

3. Protect Against SQL Injection

SQL injection is one of the oldest and still most dangerous vulnerabilities. The easy way to avoid it: don’t write SQL queries by hand; use ORMs or query builders. I’ll say it again: use established libraries.

Don’t know what an ORM is? Your AI assistant does. If you see vibe-coded raw SQL, you’re doing it wrong.

Why It Matters

SQL injection can allow attackers to access, modify, or delete data in your database. In worst cases, attackers might gain control of your entire system.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Use an ORM or query builder (like SQLAlchemy) for database access
  2. Fall back to parameterized queries if raw SQL is truly unavoidable
  3. Never build queries by concatenating or interpolating user input into strings

Conversation with Your AI Assistant:

YOU: "We need to fetch users based on their email domain"
AI: "I'll add that functionality."
YOU: "Make sure we're protected against SQL injection"
AI: "I'll use SQLAlchemy's ORM to handle this safely. If we need a more complex query, I'll use parameterized queries instead of string concatenation."

Bad Example (Python)

# 🚫 DON'T DO THIS
def get_user(user_id):
    conn = connect_to_db()
    cursor = conn.cursor()
    
    # DANGER: Direct string interpolation allows attackers to perform SQL injection
    query = f"SELECT * FROM users WHERE id = {user_id}"
    
    cursor.execute(query)
    return cursor.fetchone()

Good Example (Python with SQLAlchemy)

# ✅ DO THIS instead
from sqlalchemy.orm import Session
from sqlalchemy import text

# Using ORM
def get_user_orm(user_id, db_session):
    return db_session.query(User).filter(User.id == user_id).first()

# If you need raw SQL, still use parameterized queries
def get_user_raw(user_id, db_session):
    return db_session.execute(
        text("SELECT * FROM users WHERE id = :user_id"),
        {"user_id": user_id}
    ).fetchone()

4. Implement Cross-Site Scripting (XSS) Protection

Cross-Site Scripting (XSS) vulnerabilities are particularly dangerous and can be introduced when vibe coding UI components.

Why It Matters

XSS allows attackers to inject malicious scripts into web pages viewed by other users. These scripts can steal cookies, session tokens, or other sensitive information. The attack works when user-supplied input is rendered into a page’s HTML without being escaped or sanitized, so other visitors’ browsers execute it as code.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Sanitize any user-generated content with an established library (like DOMPurify) before rendering it
  2. Allow only a restricted set of safe HTML tags and attributes
  3. Never inject raw user input into the page

Conversation with Your AI Assistant:

YOU: "We need to display user comments with formatting like bold and links"
AI: "I'll add that to the user profile page."
YOU: "Make sure we're protected against XSS attacks"
AI: "I'll use a library like DOMPurify to sanitize the HTML content and only allow a restricted set of safe HTML tags and attributes."

Good Example (Next.js with DOMPurify)

// ✅ Using established sanitization libraries in Next.js
import DOMPurify from 'isomorphic-dompurify';
import { useState, useEffect } from 'react';

export default function Comment({ commentId }) {
  const [comment, setComment] = useState(null);
  const [sanitizedContent, setSanitizedContent] = useState('');
  
  useEffect(() => {
    // Fetch comment data
    const fetchComment = async () => {
      try {
        const response = await fetch(`/api/comments/${commentId}`);
        const data = await response.json();
        setComment(data);
        
        // Sanitize the HTML content
        const clean = DOMPurify.sanitize(data.content, {
          ALLOWED_TAGS: ['p', 'b', 'i', 'em', 'strong', 'a', 'br'],
          ALLOWED_ATTR: ['href', 'title', 'target'],
          FORBID_ATTR: ['style', 'onerror', 'onclick'],
          FORBID_TAGS: ['script', 'iframe', 'object'],
        });
        
        setSanitizedContent(clean);
      } catch (error) {
        console.error('Failed to fetch comment', error);
      }
    };
    
    if (commentId) {
      fetchComment();
    }
  }, [commentId]);
  
  if (!comment) return <div>Loading comment...</div>;
  
  return (
    <div className="comment">
      <h3>Comment by {comment.author}</h3>
      <div className="comment-content">
        {/* Using dangerouslySetInnerHTML, but ONLY with sanitized content */}
        <div dangerouslySetInnerHTML={{ __html: sanitizedContent }} />
      </div>
      <p className="timestamp">Posted on: {new Date(comment.timestamp).toLocaleString()}</p>
    </div>
  );
}

// SERVER-SIDE API ENDPOINT
// pages/api/comments/[id].js
export default async function handler(req, res) {
  const { id } = req.query;
  
  // Fetch comment from database (pseudocode)
  const comment = await db.comments.findUnique({
    where: { id: parseInt(id) }
  });
  
  // Always validate IDs and other parameters
  if (!comment) {
    return res.status(404).json({ error: 'Comment not found' });
  }
  
  res.status(200).json(comment);
}

5. Use Built-in CSRF Protection

CSRF attacks trick users into performing unwanted actions on a site where they’re authenticated. If your app has forms or actions that alter state (logins, payments, data changes), this isn’t just recommended—it’s mandatory.

Why It Matters

Without CSRF protection, attackers can force users to perform actions like changing their email, making payments, or deleting their account, without their knowledge or consent.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Enable your framework’s built-in CSRF protection (like Flask-WTF’s CSRFProtect)
  2. Include the CSRF token in every form and state-changing request
  3. Set session cookies with SameSite for additional protection

Again: you don’t need to implement CSRF protection yourself, and you shouldn’t. Use established libraries.

Conversation with Your AI Assistant:

YOU: "Let's add a form to update user profile information"
AI: "I'll create that form for you."
YOU: "Make sure we have CSRF protection"
AI: "I'll use Flask-WTF's CSRF protection and ensure the token is included in all forms. I'll also set cookies with SameSite=Lax to provide additional protection."

Good Example (Python with Flask)

# ✅ Using established libraries for CSRF protection
import os
from flask import Flask, request, render_template
from flask_wtf.csrf import CSRFProtect

app = Flask(__name__)
app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY')  # load from the environment, never hardcode
csrf = CSRFProtect(app)  # Automatically adds CSRF protection

@app.route('/profile', methods=['GET', 'POST'])
def profile():
    # CSRF protection is automatically handled by Flask-WTF
    if request.method == 'POST':
        # Process the form
        pass
    return render_template('profile.html')

6. Set Proper Security Headers

Security headers tell browsers how to behave when handling your site’s content. They’re easy to overlook when vibe coding (and in general).

Why It Matters

Security headers can prevent various attacks like XSS, clickjacking, and man-in-the-middle attacks by instructing browsers to enforce certain security policies.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Use a security-headers library like Helmet.js (Node.js) or Flask-Talisman (Flask) with its secure defaults, rather than configuring headers by hand
  2. Set a Content Security Policy, HSTS, and X-Frame-Options
  3. Force all connections to use HTTPS

Conversation with Your AI Assistant:

YOU: "Let's make sure our app has proper security headers"
AI: "I'll add those now."
YOU: "Use an established library for this"
AI: "I'll implement Flask-Talisman which will automatically set up CSP, HSTS, X-Frame-Options, and other security headers following best practices."

Good Example (Node.js with Helmet.js)

// ✅ Using established libraries for security headers
const express = require('express');
const helmet = require('helmet');

const app = express();

// Configure security headers using helmet
app.use(helmet());

// Custom CSP configuration (overrides Helmet's default policy)
app.use(
    helmet.contentSecurityPolicy({
        directives: {
            defaultSrc: ["'self'"],
            scriptSrc: ["'self'"],
            styleSrc: ["'self'"],
            imgSrc: ["'self'", "data:"]
        }
    })
);

// Force HTTPS
app.use((req, res, next) => {
    if (!req.secure && req.get('x-forwarded-proto') !== 'https') {
        return res.redirect('https://' + req.get('host') + req.url);
    }
    next();
});

// Set X-Frame-Options to DENY
app.use(helmet.frameguard({ action: 'deny' }));

const port = process.env.PORT || 3000;
app.listen(port, () => {
    console.log(`Server running on port ${port}`);
});
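
The conversation above mentions Flask-Talisman; if your backend is Flask, a minimal sketch of the equivalent setup looks like this (Talisman also redirects HTTP to HTTPS by default, which covers the manual redirect middleware above):

# ✅ The Flask equivalent, using Flask-Talisman
from flask import Flask
from flask_talisman import Talisman

app = Flask(__name__)

# Sets a Content Security Policy, HSTS, X-Frame-Options, and other headers,
# and redirects HTTP to HTTPS by default
Talisman(
    app,
    content_security_policy={
        'default-src': "'self'",
        'script-src': "'self'",
        'style-src': "'self'",
        'img-src': ["'self'", 'data:'],
    },
    frame_options='DENY',
)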

7. Validate All Input Data

The classic “never trust user input” rule. Validate all inputs on both the client and server side. Client-side validation improves user experience, but server-side validation is crucial for security, as client-side validation can easily be bypassed.

Why It Matters

Without proper validation, malicious users can submit unexpected data that could lead to security vulnerabilities, data corruption, or application crashes.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Validate all input on the server with a schema validation library (like Pydantic)
  2. Check types, formats, lengths, and allowed values before using any data
  3. Treat client-side validation as a usability nicety, not a security control

Conversation with Your AI Assistant:

YOU: "We need to accept user registration data"
AI: "I'll set up the registration endpoint."
YOU: "Make sure we validate all input properly"
AI: "I'll use Pydantic to create a validation schema that checks email format, ensures password strength, and validates all other fields before processing any data."

Good Example (Python with FastAPI/Pydantic)

# ✅ Using established validation libraries
from fastapi import FastAPI
from pydantic import BaseModel, EmailStr, validator
import re

app = FastAPI()

class UserRegistration(BaseModel):
    username: str
    email: EmailStr
    password: str
    
    @validator('username')
    def username_must_be_valid(cls, v):
        if not re.match(r'^[a-zA-Z0-9_]{3,20}$', v):
            raise ValueError('Username must be 3-20 characters and alphanumeric')
        return v
        
    @validator('password')
    def password_must_be_strong(cls, v):
        if len(v) < 8:
            raise ValueError('Password must be at least 8 characters')
        # More validation rules...
        return v

@app.post("/register/")
async def register(user: UserRegistration):
    # Data is automatically validated by Pydantic
    # Safe to use the data now
    return {"status": "registered"}

8. Implement Proper Error Handling and Logging

When vibe coding, it’s tempting to focus only on the happy path and ignore error cases. This can expose sensitive information.

Why It Matters

Improper error handling can leak sensitive data through stack traces, help attackers understand your system, or create denial of service vulnerabilities.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Catch errors with a global handler instead of letting requests crash
  2. Log full details (including stack traces) server-side with a proper logging library
  3. Show users only a generic message, ideally with a reference ID they can report

Conversation with Your AI Assistant:

YOU: "Make sure we handle errors properly in our app"
AI: "I'll implement comprehensive error handling."
YOU: "We need to log errors but not expose sensitive details to users"
AI: "I'll use the logging module with proper log levels, ensure we catch all exceptions, and only show generic error messages to users while logging the full details internally."

Good Example (Node.js with Winston)

// ✅ Proper error handling and logging
const express = require('express');
const winston = require('winston');
const { v4: uuidv4 } = require('uuid');

// Configure logging
const logger = winston.createLogger({
    level: 'info',
    format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.json()
    ),
    transports: [
        new winston.transports.Console()
    ]
});

const app = express();

// Add request ID middleware
app.use((req, res, next) => {
    req.id = uuidv4();
    res.locals.requestId = req.id;
    next();
});

// Global error handler
app.use((err, req, res, next) => {
    // Log the error with request details
    logger.error('Error processing request', {
        requestId: res.locals.requestId,
        error: err.message,
        stack: err.stack
    });

    // Return a generic error to the user
    res.status(500).json({
        error: 'An unexpected error occurred',
        reference_id: res.locals.requestId
    });
});

// Example protected route
app.get('/api/data', (req, res, next) => {
    try {
        // Your route logic here
        throw new Error('Something went wrong');
    } catch (err) {
        next(err); // Pass to error handler
    }
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
    logger.info(`Server running on port ${port}`);
});
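
The conversation above mentions Python’s logging module; here is a hedged sketch of the same pattern in Flask: log the full details server-side, and return only a generic message and a reference ID to the user.

# ✅ A Flask sketch of the same pattern, using the standard logging module
import logging
import uuid
from flask import Flask, g, jsonify

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = Flask(__name__)

@app.before_request
def assign_request_id():
    g.request_id = uuid.uuid4().hex

@app.errorhandler(Exception)
def handle_error(err):
    request_id = getattr(g, 'request_id', 'unknown')
    # Full details, including the stack trace, go to the server logs only
    logger.error("Error processing request %s", request_id, exc_info=err)
    # The user sees a generic message plus a reference ID they can report
    return jsonify(error="An unexpected error occurred", reference_id=request_id), 500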

9. Implement Proper File Upload Security

Why It Matters

Insecure file uploads can lead to server-side vulnerabilities including remote code execution, stored XSS attacks, and denial of service. Without proper restrictions, attackers could upload malicious files that compromise your entire system.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Validate both the file extension and the actual MIME type of the uploaded content
  2. Enforce a sensible file size limit
  3. Store uploads outside the web root, under randomized filenames

Conversation with Your AI Assistant:

YOU: "We need to add a profile picture upload feature"
AI: "I'll set that up for you."
YOU: "Make sure we implement proper file upload security"
AI: "I'll use secure file handling with strict MIME type validation, file size limits, and ensure files are stored securely with randomized names outside the web root."

Good Example (Python with Flask)

# ✅ Secure file uploads
import os
import uuid
from flask import Flask, request, redirect
from werkzeug.utils import secure_filename
import magic  # python-magic library for MIME type checking

app = Flask(__name__)
# Store files outside webroot
UPLOAD_FOLDER = '/var/data/uploads'
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}
MAX_CONTENT_LENGTH = 2 * 1024 * 1024  # 2MB limit

app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['MAX_CONTENT_LENGTH'] = MAX_CONTENT_LENGTH

def allowed_file(filename, filedata):
    # Check extension
    valid_extension = '.' in filename and \
                      filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
    
    # Check actual file content MIME type
    mime = magic.Magic(mime=True)
    mime_type = mime.from_buffer(filedata.read(1024))
    filedata.seek(0)  # Reset file pointer
    
    valid_mime = mime_type.startswith('image/')
    
    return valid_extension and valid_mime

@app.route('/upload', methods=['POST'])
def upload_file():
    if 'file' not in request.files:
        return redirect(request.url)
        
    file = request.files['file']
    
    if file.filename == '':
        return redirect(request.url)
        
    if file and allowed_file(file.filename, file):
        # Generate secure random filename
        original_extension = file.filename.rsplit('.', 1)[1].lower()
        new_filename = f"{uuid.uuid4().hex}.{original_extension}"
        
        # Ensure upload directory exists
        os.makedirs(app.config['UPLOAD_FOLDER'], exist_ok=True)
        
        # Save the file
        file_path = os.path.join(app.config['UPLOAD_FOLDER'], new_filename)
        file.save(file_path)
        
        # Store the path in database (not the full server path)
        db_path = f"/uploads/{new_filename}"
        # save_to_database(user_id, db_path)
        
        return {"status": "success", "file_path": db_path}
        
    return {"status": "error", "message": "Invalid file"}

10. Implement Rate Limiting and Throttling

Rate limiting is easy to forget during vibe coding sessions, but it’s essential for preventing abuse.

Why It Matters

Without rate limiting, attackers can bombard your application with requests, potentially causing denial of service, brute force password attacks, or scraping your data.

How to Handle It Right

USE ESTABLISHED LIBRARIES! Tell your AI to:

  1. Add rate limiting with an established library (like Flask-Limiter)
  2. Apply stricter limits to sensitive endpoints like login
  3. Key the limits on something meaningful, such as the client’s IP address

Conversation with Your AI Assistant:

YOU: "Let's make sure we have rate limiting on our login endpoint"
AI: "I'll add that protection."
YOU: "Use an established library for this"
AI: "I'll implement Flask-Limiter which provides robust rate limiting based on client IP address and can be customized for different endpoints."

Good Example (Python with Flask)

# ✅ Using established libraries for rate limiting
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

limiter = Limiter(
    get_remote_address,  # key requests by client IP
    app=app,
    default_limits=["200 per day", "50 per hour"]
)

@app.route('/login', methods=['POST'])
@limiter.limit("5 per minute")  # Stricter limits for sensitive endpoints
def login():
    # Login logic here
    pass

@app.route('/api/data')
@limiter.limit("1 per second")
def get_data():
    # API data logic here
    pass

Conclusion: Talking Security with Your AI Pair Programmer

Remember these conversation starters with your AI coding assistant:

  1. “Make sure we’re not storing any sensitive data in the frontend”
  2. “Use an established library for authentication”
  3. “Protect against SQL injection by using parameterized queries”
  4. “Ensure we’re sanitizing any user-generated content to prevent XSS”
  5. “Implement CSRF protection for all forms and state-changing requests”
  6. “Set up proper security headers using an established library”
  7. “Validate all input with a proper schema validation library”
  8. “Implement proper error handling and logging, without exposing details to users”
  9. “Force all connections to use HTTPS”
  10. “Add rate limiting, especially on sensitive endpoints like login”