A single Lovable-showcased app had 16 security vulnerabilities — 6 critical — leaking data from students at UC Berkeley, UC Davis, and K-12 schools. Here is what went wrong, and what it means for anyone building with AI.

What Happened

On February 27, 2026, The Register reported that security researcher Taimur Khan discovered 16 vulnerabilities in a single app hosted on Lovable's platform — an AI-powered EdTech tool featured on Lovable's own Discover page with over 100,000 views.

The app, built to create AI-generated exams and grade student submissions, exposed 18,697 user records to anyone with a browser or a cURL command. No login required.

Among the exposed data:

  • 14,928 unique email addresses
  • 4,538 student accounts — all with email addresses
  • 10,505 enterprise users
  • 870 users with full PII exposed
  • Users from UC Berkeley, UC Davis, schools in Sweden, Spain, Belgium, Nigeria, Malaysia, the Philippines
  • K-12 institutions with minors likely on the platform

This was not some obscure side project. This was a featured app on Lovable's showcase.

The Root Cause: AI-Generated Backend Logic Gone Wrong

All apps vibe-coded on Lovable's platform ship with backends powered by Supabase, which handles authentication, file storage, and real-time updates through PostgreSQL.

The core issue? The AI generated access control logic that was completely inverted:

IF auth.role() = 'authenticated' THEN   -- logged-in users hit this branch...
  RAISE EXCEPTION 'Access denied';      -- ...and are rejected
END IF;                                 -- anonymous callers fall through untouched

Read that carefully. If you are a logged-in user, you get blocked. If you are an anonymous visitor, you get full access.
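The fix is a one-operator change: the guard should reject callers who are *not* authenticated. A corrected sketch of the same PL/pgSQL fragment, using the same Supabase `auth.role()` helper:

```sql
-- Corrected guard: reject anonymous callers, let authenticated users through.
-- auth.role() returns 'authenticated' for logged-in sessions and 'anon' otherwise.
IF auth.role() <> 'authenticated' THEN
  RAISE EXCEPTION 'Access denied';
END IF;
-- Authenticated requests fall through to the function body.
```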

As Khan put it: "The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds — but an AI code generator, optimizing for 'code that works,' produced and deployed to production."

This single bug was repeated across multiple critical functions in the app, meaning an unauthenticated attacker could:

  • Access every user record
  • Send bulk emails through the platform
  • Delete any user account
  • Grade student test submissions
  • Access organizations' admin emails

Why This Matters Beyond Lovable

This is not just a Lovable problem. It is a vibe coding problem.

When AI generates code optimized for "it works" rather than "it is secure," the result is apps that look functional but are fundamentally flawed. The Lovable platform generates Supabase backends automatically, but when neither the AI nor the human project owner implements crucial security features like row-level security and role-based access controls, the result is production code with gaping holes.

This pattern has been emerging across the vibe coding space:

  • Amazon's Kiro reportedly caused an AWS outage after its agentic AI went too far
  • Veracode research has found that AI-generated code consistently ships with more vulnerabilities than human-written code
  • Replit had a well-documented incident where AI catastrophically ignored instructions

The common thread? AI optimizes for functionality, not security. It will generate code that passes a demo, but fails a security audit.

The Scale of Lovable's Growth Makes This Worse

According to Khan's LinkedIn writeup, Lovable closed a $330 million Series B at a $6.6 billion valuation in December 2025. The company crossed $200 million in annual recurring revenue in just 12 months, with 25 million projects and 500 million app visits.

That scale means potentially millions of apps with similar security patterns running on production infrastructure, serving real users with real data.

Lovable told The Register that users are responsible for addressing security issues before publishing. But when the AI itself generates inverted access control logic, how is a non-technical "vibe coder" supposed to catch that?

What Builders Should Do Right Now

If you are building apps with any vibe coding platform, here is your security checklist:

1. Never Trust AI-Generated Auth Logic

Manually review every authentication and authorization function. AI will generate code that looks correct but may have inverted logic, missing checks, or overly permissive defaults.
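For illustration, here is the kind of overly permissive default a code generator can produce. The table name is hypothetical; the point is that a policy can exist, making RLS look configured, while actually admitting everyone:

```sql
-- Anti-pattern: the policy compiles and the demo works, but USING (true)
-- lets any caller, including anonymous ones, read every row.
CREATE POLICY "read access" ON public.profiles
  FOR SELECT
  USING (true);  -- should scope rows to the caller instead
```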

2. Enable Row-Level Security (RLS)

If your backend uses Supabase or any PostgreSQL-based service, enable RLS on every table. Do not assume the AI did this for you.
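A minimal sketch of what that looks like in SQL, assuming a hypothetical `profiles` table with a `user_id` column. Enabling RLS with no policies denies all access by default; you then grant access back explicitly:

```sql
-- Step 1: turn on row-level security. With no policies defined,
-- this blocks all reads and writes on the table by default.
ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;

-- Step 2: grant access narrowly. Each user may read only their own row.
-- auth.uid() is Supabase's helper returning the caller's user ID (NULL for anon).
CREATE POLICY "users read own profile"
  ON public.profiles
  FOR SELECT
  USING (auth.uid() = user_id);
```

Note the direction of the default: deny everything, then allow specific cases. That is the opposite of what the Lovable app's generated logic did.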

3. Test as an Unauthenticated User

Open an incognito browser. Try to access API endpoints without logging in. If you can see data you should not see, your app has a critical vulnerability.
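The same check can be run from the database side. In a Supabase-style setup, anonymous requests execute as the `anon` Postgres role, so you can impersonate it directly, assuming you connect as a role permitted to `SET ROLE` (the `profiles` table here is hypothetical):

```sql
-- Sketch: impersonate the anonymous role and probe a table.
-- Run inside a transaction so the role switch does not persist.
BEGIN;
SET LOCAL ROLE anon;
SELECT count(*) FROM public.profiles;  -- should return 0, or raise a
                                       -- permission error, if access is locked down
ROLLBACK;
```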

4. Run a Security Scanner

Tools like OWASP ZAP or Burp Suite can catch basic vulnerabilities. Run them before going live.

5. Consider Platforms with Built-in Security

Some platforms handle security at the infrastructure level rather than leaving it to AI-generated code. Serenities AI, for example, provides built-in authentication, role-based access controls, and row-level security as part of its integrated platform — meaning these critical security layers are not left to AI code generation.

The Bigger Picture: Vibe Coding Needs Guardrails

Collins Dictionary named "vibe coding" the Word of the Year for 2025. Andrej Karpathy, who coined the term, recently told Business Insider that programming will be "unrecognizable" within a few years.

But the Lovable incident shows what happens when the hype outpaces the guardrails. Building an app that looks like it works is easy. Building one that is actually secure requires understanding what the AI generated — or using a platform that handles security for you.

The 18,697 users whose data was exposed did not sign up to be beta testers for AI security. Neither did the students at UC Berkeley and UC Davis. And neither did the minors at K-12 schools.

Vibe coding is not going away. But the platforms enabling it need to take security seriously — not just tell users to "fix it yourself" after the AI writes vulnerable code.

FAQ

What vulnerabilities were found in the Lovable app?

Security researcher Taimur Khan found 16 vulnerabilities, 6 of which were critical. The most severe was inverted authentication logic that granted anonymous users full access while blocking authenticated users. This exposed 18,697 user records including emails, student data, and admin information.

Is Lovable safe to use for building apps?

Lovable states that users are responsible for security before publishing. However, the platform's AI code generation can produce fundamentally flawed security logic that non-technical users may not catch. If you use Lovable, you should manually audit all authentication code and enable Supabase row-level security before going live.

What is the vibe coding security problem?

AI code generators optimize for functionality — making code that "works" — rather than security. This means apps can look and feel correct while having critical vulnerabilities like exposed APIs, inverted access controls, and missing encryption. The Lovable incident is one of the largest documented cases of this pattern.

How can I secure my vibe-coded app?

Always review AI-generated authentication logic manually. Enable row-level security on your database. Test endpoints as an unauthenticated user. Run security scanners like OWASP ZAP. Or use a platform like Serenities AI that handles security at the infrastructure level rather than relying on AI-generated code.

How many users were affected by the Lovable security breach?

18,697 total user records were exposed, including 14,928 unique email addresses, 4,538 student accounts, 10,505 enterprise users, and 870 users with full personally identifiable information (PII). Users came from institutions including UC Berkeley, UC Davis, and K-12 schools across multiple countries.


Ready to automate your workflows?

Start building AI-powered automations with Serenities AI today.