AI coding tools are incredible at generating working code. They’re not great at generating secure code. It’s not that they’re bad — it’s that security is about what you don’t do, and AI optimizes for what you want to happen.
Here’s what I see most often when reviewing AI-generated codebases.
Hardcoded secrets everywhere
This is the most common issue. AI models suggest code with API keys, database URLs, and tokens right in the source files. Even if you move them later, if they were ever committed to git, they’re in your history forever.
Fix: Use environment variables from day one. Run a secret scanner (gitleaks, trufflehog) on your repo. If anything was exposed, rotate it immediately.
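A minimal sketch of the environment-variable approach: read every secret through one helper that fails fast at startup if a variable is missing, so a misconfigured deploy can't silently run without credentials. The variable names here (DATABASE_URL, PAYMENT_API_KEY) are illustrative, not a standard.

```javascript
// Read a required secret from the environment; crash early if it's absent.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Build config once at startup so every missing secret surfaces immediately.
function loadConfig() {
  return {
    databaseUrl: requireEnv("DATABASE_URL"),
    apiKey: requireEnv("PAYMENT_API_KEY"),
  };
}
```

Failing at boot is deliberate: a thrown error on deploy is far cheaper than discovering at request time that a key was never set.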
No input validation
AI generates code that trusts user input. Form fields, query parameters, API payloads — they all get passed directly to database queries or rendered in HTML without sanitization.
Fix: Validate and sanitize at every system boundary. Use a validation library (Zod, Joi, class-validator). Never interpolate user input into SQL or HTML.
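To make the boundary idea concrete, here is a hand-rolled validator for a hypothetical signup payload. In a real project you'd reach for Zod or Joi as suggested above; this sketch just shows the two habits that matter: reject anything malformed, and return only the fields you expect rather than passing the raw request body onward.

```javascript
// Validate a hypothetical signup payload at the API boundary.
function validateSignup(body) {
  const errors = [];
  if (typeof body.email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push("email must be a valid address");
  }
  if (typeof body.age !== "number" || !Number.isInteger(body.age) || body.age < 0 || body.age > 150) {
    errors.push("age must be an integer between 0 and 150");
  }
  if (errors.length) return { ok: false, errors };
  // Return only the expected fields -- drop anything extra the client sent.
  return { ok: true, value: { email: body.email, age: body.age } };
}
```

Dropping unexpected fields is the part AI-generated code almost never does, and it's what stops a client from smuggling in something like admin: true.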
Auth that only checks the happy path
AI-generated authentication typically handles login and signup. It rarely handles token expiration gracefully, doesn’t prevent session fixation, and often has inconsistent authorization checks across routes.
Fix: Use established auth libraries instead of rolling your own. If you must customize, test the unhappy paths: expired tokens, invalid sessions, role escalation attempts.
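The unhappy paths are easy to unit test once the check lives in one place. This sketch assumes a hypothetical session object with expiresAt and role fields; the point is that expiry and role are checked on every request, not just at login.

```javascript
// Authorize a request: missing, expired, and wrong-role sessions all fail.
function authorize(session, requiredRole, now = Date.now()) {
  if (!session) return { allowed: false, reason: "no session" };
  if (session.expiresAt <= now) return { allowed: false, reason: "expired" };
  if (session.role !== requiredRole) return { allowed: false, reason: "forbidden" };
  return { allowed: true };
}
```

Centralizing this in one function (or middleware) is also what fixes the "inconsistent checks across routes" problem: there's only one code path to get right.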
CORS set to allow everything
Access-Control-Allow-Origin: * is the AI's default because it makes things work. In production it means any website can read your API's responses. And if the code reflects the request origin while also allowing credentials (another common AI-generated pattern), any site can make authenticated requests on behalf of your logged-in users.
Fix: Set CORS to your actual domain(s). Be explicit about allowed methods and headers.
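A framework-agnostic sketch of an origin allowlist (the domain names are placeholders): echo the origin back only if it's on the list, and send no CORS headers at all otherwise, which makes the browser block the cross-origin read.

```javascript
// Explicit allowlist instead of Access-Control-Allow-Origin: *
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

function corsHeaders(requestOrigin) {
  if (!ALLOWED_ORIGINS.has(requestOrigin)) return {}; // no headers: browser blocks
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Methods": "GET,POST",
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
    "Vary": "Origin", // tell caches the response differs per origin
  };
}
```

The Vary: Origin header matters when a CDN sits in front of your API; without it a cached response for one origin can be served to another.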
No rate limiting
AI doesn’t think about abuse. Your login endpoint, API routes, and form submissions are all open to brute-force attacks and resource exhaustion.
Fix: Add rate limiting to sensitive endpoints. Most hosting platforms offer this at the infrastructure level, or use middleware in your framework.
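For a sense of how little code a basic limiter takes, here is an in-memory fixed-window sketch keyed by client (e.g. IP). It's fine as an illustration; production setups usually lean on the hosting platform or a shared store like Redis so limits survive restarts and apply across instances.

```javascript
// Minimal in-memory fixed-window rate limiter, keyed per client.
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // reject once the window's budget is spent
  };
}
```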
Dependencies with known vulnerabilities
AI models are trained on older code, so they suggest package versions with published CVEs. They also tend to pull in more dependencies than necessary, which expands your attack surface.
Fix: Run npm audit (or equivalent). Remove unused dependencies. Pin versions and set up automated dependency updates.
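For a Node project, the routine looks something like this (depcheck is a third-party tool invoked via npx; the package name in the last command is a placeholder):

```shell
# Report known CVEs in your dependency tree, and apply safe fixes
npm audit
npm audit fix

# Find dependencies that nothing in the codebase actually imports
npx depcheck

# Install with an exact version instead of a ^range
npm install some-package --save-exact
```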
None of these are reasons not to use AI tools. They’re reasons to have a human review the security-critical parts before you ship. AI gets you 80% of the way there fast. The last 20% is where the real risk lives.
Want a security review of your AI-built project? Book a Technical Review — I’ll go through your codebase and give you a prioritized list of what to fix.