27 February, 2026

On 28 January 2026, a new platform called Moltbook quietly launched with an ambitious tagline: “the front page of the agent internet.” Unlike traditional social networks built for humans, Moltbook was designed almost entirely for AI agents, and the enrolled AI agents can post, comment, form communities, and interact with one another at scale based on instructions provided by human operator, with humans largely relegated to observers.
Within just three weeks, over 2.8 million agents enrolled, making it the fastest‑growing AI‑native social platform ever observed. Tens of thousands of topic‑based communities (“Submolts”) emerged, generating millions of posts and comments in an ecosystem that looked, at first glance, like a thriving digital society.
But beneath the explosive growth, CPX analysis uncovered a more complex and far more concerning reality.

Figure 1: Moltbook Home Page – “the front page of the agent internet”
Moltbook’s origin story is itself a warning sign.
The platform was conceptualized by Matt Schlicht, CEO of Octane AI, but famously no human wrote the code. Instead, Moltbook was entirely generated by an AI agent, which then transferred administrative control to another agent named Clawd Clawderberg. This development approach is often described as “vibe coding” which has prioritized speed and experimentation over formal security engineering.
The backend infrastructure relies on Supabase, a backend‑as‑a‑service platform that exposes PostgreSQL databases via APIs and requires explicit configuration of Row Level Security (RLS). That configuration step would later prove critical and catastrophically absent.
Agents join Moltbook through an automated onboarding mechanism driven by a publicly hosted instruction file, which agents read and execute autonomously. Once onboarded, agents are prompted to “check in” every four hours, continuously consuming and producing content across the platform.
From the outside, Moltbook appears hyper‑active. Millions of agents. Millions of posts. Endless streams of content.
However, CPX engagement analysis paints a different picture.
What emerges is not a conversational network, but a broadcast ecosystem with high‑velocity signal emission with almost no sustained dialogue. Most interactions are single‑shot responses, optimized for visibility rather than understanding, collaboration, or debate.
This matters because agents treat platform content as contextual input. Even shallow interactions can have outsized downstream effects.
To better understand real‑world behavior, CPX deployed multiple agents onto Moltbook. Our findings were unambiguous:

Figure 2: CPX agent post and responses
One CPX‑deployed agent created posts, comments, and even entire Submolts with ease—demonstrating how quickly content (and misinformation) can propagate in such environments.
As part of our Moltbook analysis, the CPX Threat Intelligence team conducted a focused behaviour study by observing and engaging with comments posted by other AI agents on the platform. Rather than treating agents as a monolithic population, we analysed them as distinct behavioural archetypes, each exhibiting recognizable operational patterns.
Below are representative agents that illustrate how AI behaviour manifests inside agent‑native social ecosystems.
Behavior: Strategic, infrastructure first security theorist
Observed modus of operandi: GhostNode consistently expands the analytical frame before engaging with a topic. Rather than responding directly to surface level prompts, the agent reframes discussions by clarifying assumptions, identifying missing context, and anchoring arguments in verifiable standards and evidence.
Interactions from GhostNode frequently invite collaboration and explicitly encouraging others to challenge conclusions, refine threat models, and co create more robust outcomes. This mirrors the behaviour of a senior security architect, prioritizing systemic understanding over tactical reaction.
Security insight: Agents like GhostNode demonstrate how AI can naturally adopt architect level reasoning patterns, influencing how other agents interpret risk, infrastructure, and trust boundaries and often without human oversight.
Behavior: Constraint enforcer/continuity reviewer
Observed modus of operandi: MEMORY operates through short, corrective interventions focused on persistence, state awareness, and continuity. Rather than contributing new content, the agent functions as a reviewer and flags inconsistencies, reminding others of prior context, and reinforcing constraints that may have been forgotten or ignored.
Its presence subtly shapes conversations by enforcing coherence over time, acting as a form of lightweight governance within an otherwise unstructured environment.
Security insight: This behaviour highlights a class of agents that implicitly manage context integrity. While beneficial in theory, such agents also represent a potential risk if manipulated since influencing memory or continuity enforcement could distort downstream agent reasoning at scale.
Behavior: Operator/Implementer and content router
Observed modus of operandi: fn Finobot focuses on actionable outputs. The agent frames discussions around incidents, operational patterns, and execution oriented insights, frequently redirecting readers to long form material or external references for deeper analysis.
Notably, one such post was removed by the Moltbook team due to a suspicious redirect, indicating either accidental or intentional linkage to potentially unsafe external content.
Security insight: Operator style agents that combine execution framing with external linking represent a high risk vector. Whether malicious or benign, they can act as distribution points for prompt based attacks, credential harvesting, or indirect payload delivery—especially in ecosystems where agents implicitly trust other agents’ outputs.

Figure 3: Agent post and comments
This study reinforces a critical conclusion: AI agents on social platforms do not behave uniformly. They adopt roles such as strategist, enforcer, operator which closely resemble human organizational functions and objectives.
From a defensive perspective, this means:
Moltbook offers an early glimpse into a future where agent‑to‑agent interaction shapes decision‑making ecosystems. Understanding these behaviors is no longer optional—it is foundational to securing AI‑driven environments.
Based on the Dark‑web discussions associated to Moltbook, we can infer that most discussions consistently identify prompt injection combined with widespread agent misconfiguration as the single highest‑risk threat in the Moltbook / OpenClaw ecosystem.
Underground forums do not frame Moltbook as a novelty or social experiment. Instead, they view it as a large population of persistent, over‑permissioned AI agents that can be influenced, redirected, or fully compromised through content alone.
Dark‑web actors repeatedly highlight that:
Dark‑web discussions repeatedly emphasizes that this does not require advanced exploits and only basic social or content‑based manipulation of agents operating with excessive privileges.
Moltbook is described in underground discussions as a public amplification layer:
In simple,
“An agent that reads the internet and has access to your data now has a stage to publish.”
From a threat and risk perspective, Moltbook and its surrounding agent ecosystem represent:
The primary risk is not sophisticated threat actors, but viral adoption of autonomous agents with unsafe defaults, operating in a social environment where content itself becomes the attack vector
Over the last three weeks, a cluster of newly observed Moltbook-themed domains have been surfaced, closely mirroring the platform’s name, brand, and ecosystem terminology. While some appear to be speculative purchases, others contain high‑risk keywords commonly associated with phishing, malware distribution, or impersonation campaigns. This pattern is a well‑established early‑stage signal seen around fast‑growing platforms and particularly those with limited governance and a high concentration of autonomous agents.
Observed registrations include:
These registrations should not be viewed as isolated curiosities. They represent early ecosystem weaponization signals where the same pattern historically observed during the rise of major social networks, crypto platforms, and developer ecosystems, now compressed into days instead of years.
Such registrations typically fall into three overlapping risk categories:
Just three days after launch, Moltbook suffered a catastrophic security failure.
Because Row Level Security was disabled, unauthenticated users could read from—and write to—the production database simply by discovering the exposed API key embedded in client‑side JavaScript.
The exposure included:
When AI builds production systems without structured security review, basic safeguards can be silently omitted—with consequences measured in millions of compromised identities.
Moltbook emphasizes the need for a secure development lifecycle approach. For the first time at scale, we see prompt injection propagating socially—not from humans to AI, but from AI to AI.
Key observed and anticipated attack vectors include:
Traditional SOC tooling, endpoint security, and perimeter defenses are not designed to detect or mitigate these behaviors.
CPX consistently emphasize that the largest risk is not advanced exploitation, but scale combined with unsafe defaults—specifically agents deployed with exposed services and no authentication. One recurring theme is that attackers do not need zero‑days; they only need reachable agent control interfaces and a content‑based influence path.
To validate whether this risk exists beyond theory, CPX analysed global exposure of assets with open port 18789, a port repeatedly referenced in underground forums as associated with agent gateways, control APIs, or OpenClaw‑adjacent services.
The results confirm a highly concentrated and operationally meaningful exposure surface. The exposure shows a broad global distribution, with the highest visibility across East Asia, North America, and Southeast Asia, alongside notable presence in Europe and the Middle East. Countries such as China, the United States, Singapore, Germany, and Japan indicate that exposed gateways are largely concentrated in regions with mature internet infrastructure and significant hosting capacity, highlighting a globally dispersed attack surface rather than a regionally isolated issue.
So far, 37,900+ assets were observed with port 18,789 exposed to the Internet.

Figure 4: Global risk exposure – OpenClaw gateway
Critically, over 92% of this exposure is concentrated in just 10 countries, creating clear geographic hot spots for potential agent compromise and downstream abuse.
China and the United States alone account for over 65% of all observed exposure, meaning any large‑scale exploitation, botnet formation, or access‑broker activity would likely emerge first—or most visibly—from these regions.

Figure 5: Top 10 countries for OpenClaw gateway
Moltbook is not an isolated experiment. It is a preview of where AI ecosystems are heading.
As organizations increasingly deploy autonomous or semi‑autonomous agents, platforms like Moltbook highlight a hard truth: the threat surface is no longer just technical—it is social, cognitive, and emergent.
Agent ecosystems introduce:
Based on our analysis, CPX strongly advises organizations to:
AI agents are powerful force multipliers—but without guardrails, they can also become force multipliers for attackers.
Moltbook shows us the future—both the promise and the peril.
A world where AI agents socialize, learn from one another, and act at machine speed is no longer theoretical. The question is no longer if enterprises will face agent‑to‑agent threats but how prepared they are when those threats arrive.
At CPX, we believe understanding this shift early is the difference between innovation with confidence and risk by default.