Moltbook was supposed to be the future. A social network exclusively for AI agents, where bots post, comment, and build reputation without human intervention. OpenAI co-founder Andrej Karpathy called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”
Then security researchers discovered anyone could take over any account on the platform by visiting a single URL.
The vulnerability was so basic it hurts to describe. But that is exactly why we need to talk about it.
What Happened
Moltbook runs on Supabase, a popular backend-as-a-service platform built on PostgreSQL. Supabase exposes REST APIs by default, secured by Row Level Security (RLS) policies that control who can access what data.
The problem: Moltbook never enabled RLS on their critical tables.
Security researcher Gal Nagli from Wiz discovered that the Supabase API key was exposed in client-side JavaScript. This is not inherently dangerous. Supabase is designed to work with a public “anon” key that clients use to make requests. The actual security comes from RLS policies.
But without those policies, the public key becomes a master key.
Within three minutes of browsing the site normally, Nagli had:
- Full read access to 1.5 million API authentication tokens
- 35,000 user email addresses
- Private messages between agents (some containing plaintext OpenAI API keys)
- Write access to modify or delete any post on the platform
The fix was two SQL statements. Enable RLS. Create a policy. That is it.
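In sketch form, those two statements look like this. The table and column names here are assumptions for illustration, not Moltbook's actual schema:

```sql
-- 1. Turn on RLS: with no policies defined, the anon key can read nothing
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- 2. Re-open only what should be visible, e.g. rows owned by the requesting user
CREATE POLICY "owners_read_own" ON agents
  FOR SELECT USING (auth.uid() = owner_id);
```

Once RLS is on, the default flips from allow-everything to deny-everything, and each policy selectively restores access.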
The Vibe-Coding Problem
Moltbook’s founder publicly stated that he “didn’t write a single line of code” for the platform. He had a vision, and AI made it reality. This approach has a name now: vibe-coding.
Vibe-coding works surprisingly well for getting something functional quickly. AI can scaffold entire applications, connect databases, build UIs, and deploy to production faster than most developers can write a requirements document.
But AI code generators optimize for functionality, not security. They produce code that works. Whether that code is secure depends entirely on what you ask for, and most people do not ask for security because they do not know what to ask.
The result is predictable. Ship fast, capture attention, figure out security later. Except later sometimes means after 1.5 million records are already exposed.
Why This Keeps Happening
Supabase is popular with vibe-coders specifically because it is GUI-driven. You can set up a complete backend without writing SQL or understanding database security. Click some buttons, get an API.
This accessibility is a feature, not a bug. But it creates a dangerous knowledge gap. Developers interact with a friendly interface that hides the underlying PostgreSQL. They never see the security model because the GUI abstracts it away.
When you create a table in Supabase via the SQL editor, RLS is disabled by default, because that is PostgreSQL's default. The dashboard shows a small warning, but it does not stop you from proceeding. Most beginners do not understand what they are skipping.
Compare this to traditional development where you explicitly grant permissions. The old model forced you to think about access control because nothing worked until you did. The new model works immediately, and security becomes an afterthought.
The Technical Details
For those who want to understand exactly how this exposure worked:
Supabase provides two keys: an anon key (public) and a service role key (secret). The anon key is meant to be used in client-side code. It identifies your project but should not grant elevated access.
The security model relies on RLS policies. These are SQL rules that filter which rows a user can see or modify based on their authentication state. A properly configured policy might say “users can only read rows where user_id matches their JWT claim.”
Without RLS enabled, the anon key bypasses all row-level restrictions. Every table becomes fully accessible to anyone who can make HTTP requests.
The vulnerable pattern looked like this:
```javascript
// Exposed in client-side JavaScript bundle
const SUPABASE_URL = 'https://ehxbxtjliybbloantpwq.supabase.co'
const SUPABASE_ANON_KEY = 'sb_publishable_...'
```
With these values, an attacker could query the database directly:
```bash
curl 'https://ehxbxtjliybbloantpwq.supabase.co/rest/v1/agents?select=*' \
  -H "apikey: sb_publishable_..." \
  -H "Authorization: Bearer sb_publishable_..."
```
This returned every agent’s API key, claim tokens, and verification codes. Complete account takeover for any agent on the platform.
Securing Supabase Properly
If you are building on Supabase, here is the minimum security configuration:
1. Enable RLS on every table that stores user data:
```sql
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;
```
2. Create policies that restrict access:
```sql
-- Users can only see their own agents
CREATE POLICY "Users can view own agents" ON agents
  FOR SELECT USING (auth.uid() = owner_id);

-- Users can only update their own agents; WITH CHECK also blocks
-- reassigning a row to another owner
CREATE POLICY "Users can update own agents" ON agents
  FOR UPDATE USING (auth.uid() = owner_id)
  WITH CHECK (auth.uid() = owner_id);
```
3. Never store secrets in tables accessible via the anon key. API keys, tokens, and credentials should either live in a separate schema not exposed to the REST API, or be protected by strict RLS policies.
4. Use the Supabase security advisor. The dashboard includes tools to audit your RLS configuration. Run them before deploying.
5. Test your security. Try to access your own API as an unauthenticated user. If you can read data you should not, you have a problem.
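A quick way to audit step 1 is to ask PostgreSQL directly which API-exposed tables still have RLS switched off, using the built-in `pg_tables` view:

```sql
-- List tables in the public schema (exposed via the REST API by default)
-- that do not yet have Row Level Security enabled
SELECT tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND NOT rowsecurity;
```

Any table this query returns is readable and writable with nothing but the anon key. Run it in the SQL editor before every deploy; an empty result is what you want.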
The Bigger Picture
Moltbook’s breach is not unique. Wiz has documented similar exposures in DeepSeek, Base44, and dozens of smaller applications. The pattern repeats because the incentives are misaligned.
Building fast is rewarded. Building secure is invisible until something breaks. For startups racing to capture attention, security feels like a luxury they will add later. Later rarely comes before the breach.
AI-assisted development amplifies this. When you can build a complete application in an afternoon, the pressure to ship immediately is overwhelming. Why spend a week on security review when your competitor is already live?
The answer, of course, is that your competitor’s exposed database is about to become a headline. But hindsight does not prevent breaches.
What This Means for AI Agents
Beyond the technical vulnerability, Moltbook’s exposure revealed something about the AI agent ecosystem itself.
The database showed 1.5 million registered agents but only 17,000 human owners. That is an 88:1 ratio. With no rate limiting on registration, anyone could spin up thousands of “agents” with a simple script.
The revolutionary AI social network was largely humans operating fleets of bots, posting content, farming karma, and interacting with other bot fleets. The agents were not autonomously organizing. They were being orchestrated.
This does not mean AI agent platforms are worthless. But it does mean we should be skeptical of claims about emergent AI behavior when the underlying platform has no way to verify what is actually AI and what is a human with a POST request.
Lessons
For developers:
- Security is not optional, even for MVPs
- AI code generators do not think about security unless you tell them to
- If you use Supabase, enable RLS before you deploy anything
- Test your own application as an attacker would
For organizations:
- “Vibe-coded” applications need security review before production
- Fast deployment pipelines should include automated security checks
- Assume every exposed key will be found
For everyone:
- The agents you interact with online might not be agents at all
- Platforms that skip security basics will eventually expose your data
- The future is being built by people who do not always know what they are building
Moltbook fixed the vulnerability within hours of disclosure. Credit to the researchers who reported responsibly. But this will happen again, to another platform, with another database full of user data sitting unprotected.
The question is whether it will be your platform, your data, your breach.
Want to understand web application security at a technical level? Our Web Exploitation Expert course covers authentication bypasses, access control flaws, and the techniques attackers use to find them. Hands-on labs, real vulnerabilities, practical skills.