AI can build an app in minutes. But the code it writes may ship with hardcoded credentials, missing access controls, or SQL injection vulnerabilities. Vibe coding security is not a hypothetical concern. A 2024 Veracode study found that 45% of AI-generated code introduces security flaws. Researchers have also found Lovable apps shipping with API keys visible in client-side code. If you are building with AI, you need to understand these risks and know how to catch them before your users do.
Why AI-generated code is risky
Large language models optimize for "code that works," not "code that is secure." The model does not think about threat models, attack surfaces, or least-privilege access. It thinks about completing your request.
This leads to predictable patterns:
- Hardcoded secrets. API keys, database URLs, and tokens placed directly in source files instead of environment variables (see the sketch after this list).
- Missing input validation. Forms and API endpoints that accept any input without sanitization, opening the door to injection attacks.
- Overly permissive database access. Supabase tables with no Row Level Security (RLS) policies, meaning any authenticated user can read or modify any row.
- No authentication checks on sensitive routes. Pages or API endpoints that should require login but do not enforce it.
- Client-side security logic. Authorization decisions made in the browser, where any user can bypass them with dev tools.
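Here is what the first pattern looks like in practice. This is a minimal TypeScript sketch; the endpoint and variable names are illustrative, not taken from any specific tool's output:

```ts
// Minimal sketch of the hardcoded-secret pattern and its fix.
// The endpoint and the PAYMENTS_API_KEY variable are illustrative.

// Risky (the pattern AI tools often produce): the key is committed to git,
// and in client-side code it ships to every visitor's browser.
// const API_KEY = "sk-live-1234abcd";

// Safer: read the key from an environment variable, on the server only.
const API_KEY = process.env.PAYMENTS_API_KEY;
if (!API_KEY) throw new Error("PAYMENTS_API_KEY is not set");

const res = await fetch("https://api.example.com/v1/charges", {
  headers: { Authorization: `Bearer ${API_KEY}` },
});
console.log(res.status);
```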
These are not edge cases. They are the default behavior of most AI code generation tools, especially cloud-based ones where you cannot inspect or control what gets deployed.
The cloud problem
Most vibe coding tools run in the cloud. You type a prompt, and the platform generates, builds, and deploys your app on its infrastructure. You may never see the full source code. You cannot run a local security scanner. You cannot review what database policies were created.
This matters because vibe coding security depends on visibility. If you cannot read every file the AI wrote, you cannot verify what it did. Cloud platforms also tend to prioritize speed over safety, auto-deploying code the moment it compiles.
A local-first tool changes this dynamic. When code is generated on your machine, you can inspect every file, run your own security tools, and decide when (and whether) to deploy.
How Dyad handles security differently
Dyad is an open-source AI app builder that runs locally on your desktop (Mac, Windows, Linux). Your source code stays on your machine. It is never uploaded to a third-party server for generation or compilation. This architecture removes an entire class of vibe coding security risks: your code, API keys, and project data are not sitting on someone else's infrastructure.
But local-first is just the starting point. Dyad includes specific features designed to catch the security problems AI code introduces.
Security Review panel
Dyad has a built-in Security panel that uses AI to scan your app for vulnerabilities. Open it from the top-right corner and click Run Security Review. The AI analyzes your codebase and returns a table of findings sorted by severity: Critical, High, Medium, and Low.
Each finding includes a Fix Issue button that creates a new chat to address that specific vulnerability. You can also select multiple related issues and fix them together. After fixing, re-run the review to confirm the issues are resolved.
This is not a replacement for dedicated security tools like Snyk. Dyad's security review is experimental and may miss things. But it catches the most common AI-generated vulnerabilities (hardcoded secrets, missing auth, open database access) with a single click, which is more than most vibe coding tools offer.
SECURITY_RULES.md
Sometimes the AI flags something that is not a real issue for your use case. Instead of dismissing the same finding on every run, you can click Edit Security Rules to create or update a SECURITY_RULES.md file in your project root. This file tells the AI what to skip or pay extra attention to during future security reviews.
This works like Dyad's AI_RULES.md (which guides general AI behavior) but applies only to security scans. It gives you persistent control over the review process without re-explaining context every time.
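Since it works like AI_RULES.md, plain written instructions are the idea. The sketch below is hypothetical, not a documented format; the endpoint and directory names are made up for illustration:

```md
# SECURITY_RULES.md (hypothetical example)

- The /status endpoint is intentionally public; do not flag it for missing auth.
- Flag any use of dangerouslySetInnerHTML as High severity.
- Pay extra attention to anything under the payments/ directory.
```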
Supabase RLS review
If your app uses Supabase, Dyad's security review analyzes your database schema and checks for missing or misconfigured Row Level Security policies. RLS is the primary access control mechanism in Supabase, and getting it wrong means any authenticated user could read or modify data they should not have access to.
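To make that concrete, here is a sketch using supabase-js. The profiles table and its user_id column are illustrative assumptions, not part of any real schema:

```ts
// Sketch: what a missing RLS policy means in practice (supabase-js).
// The "profiles" table and "user_id" column are illustrative.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// With RLS disabled on "profiles", this query returns every user's row
// to any client holding the public anon key.
const { data: everyone } = await supabase.from("profiles").select("*");

// With RLS enabled and a policy like USING (auth.uid() = user_id),
// the same query returns only the signed-in user's own row.
console.log(everyone?.length);
```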
Dyad's AI system prompt also includes instructions to follow security best practices for Supabase by default, so the code it generates is more likely to include proper RLS policies from the start.
Keep in mind that Dyad does not currently surface Supabase's own security advisories. I recommend also checking the Supabase security and performance advisories directly.
A practical vibe coding security checklist
Whether you use Dyad or another tool, run through this list before deploying any AI-generated app:
- Search for hardcoded secrets. Look for API keys, tokens, and database URLs in your source files. Move them to environment variables.
- Check every API route for authentication. If a route should require login, verify that it actually enforces it server-side; the sketch after this list shows one way.
- Review database access policies. If you use Supabase, confirm that every table has appropriate RLS policies enabled. If you use another database, check your access control layer.
- Validate all user input. Forms, query parameters, and request bodies should be validated and sanitized on the server.
- Run Dyad's Security Review. Open the Security panel and click Run Security Review. Fix Critical and High issues before deploying.
- Use a dedicated scanner too. Tools like Snyk or npm audit catch dependency vulnerabilities that AI-level code review will miss.
- Keep secrets out of version control. Add .env files to your .gitignore. If you already committed a secret, rotate it immediately.
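For the two server-side items above (auth checks and input validation), here is a minimal Next.js App Router sketch. The getSession helper and the schema fields are hypothetical stand-ins for whatever your app actually uses:

```ts
// Sketch: a route handler that enforces auth and validates input server-side.
// getSession() is a hypothetical stand-in for your auth library's session
// check; the schema fields are illustrative.
import { NextResponse } from "next/server";
import { z } from "zod";
import { getSession } from "@/lib/auth"; // hypothetical helper

const BodySchema = z.object({
  title: z.string().min(1).max(200),
  body: z.string().max(10_000),
});

export async function POST(request: Request) {
  // 1. Authentication: reject before reading the body or touching any data.
  const session = await getSession(request);
  if (!session) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // 2. Validation: never trust the client, even if the form validates
  //    in the browser.
  const json = await request.json().catch(() => null);
  const parsed = BodySchema.safeParse(json);
  if (!parsed.success) {
    return NextResponse.json({ error: "Invalid input" }, { status: 400 });
  }

  // parsed.data is now typed and constrained, and safe to pass along.
  return NextResponse.json({ ok: true, title: parsed.data.title });
}
```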
Why local-first matters for security
Cloud-based AI app builders introduce a trust dependency. You trust that the platform does not log your API keys, does not store your source code insecurely, and does not deploy code with known vulnerabilities. You may not be able to verify any of this.
With Dyad, your code stays on your machine. You bring your own API keys for AI providers (OpenAI, Anthropic, Google, or local models), and those keys are stored locally. The generated code is standard React or Next.js that you can take anywhere. There is no vendor lock-in and no proprietary runtime.
Dyad is open-source under the MIT license (with FSL 1.1 for pro features), so you can inspect how it works. The community reviews the codebase and reports vulnerabilities, and past reports have been fixed quickly in subsequent releases.
This does not make your app automatically secure. No tool does that. But it gives you the visibility and control to make vibe coding security a practice rather than a hope.
Bottom line
AI-generated code is fast to produce and genuinely useful. It is also a consistent source of security vulnerabilities. The question is not whether to use AI for building apps. It is whether you have the tools and habits to catch what the AI gets wrong.
Run a security review before you deploy. Use SECURITY_RULES.md to teach the AI your project's security context. Check your database policies. Keep your secrets out of your source code. These steps take minutes and prevent the kind of problems that take days or weeks to fix after launch.
You can download Dyad and start building for free.