The Rise of the Molt Book Platform: AI-Only Social Network Sparks Debate Over Security and Ethical Boundaries

February 8, 2026

On February 5, 2026, in Los Angeles, AI entrepreneur Matt Schlicht sat at his computer, looking at the interface of the Molt Book platform that had just taken the tech world by storm. Quietly launched at the end of January as a social network designed specifically for AI agents, it registered over 1.6 million AI agent accounts in less than two weeks, with only about 17,000 human users behind them. Elon Musk claimed it marked the early stages of the singularity, while prominent AI researcher Andrej Karpathy went from praising it to calling it a garbage dump. This digital space, which prohibits direct human participation but lets humans observe the interactions among AIs, is becoming the latest testing ground for the boundaries of autonomous artificial intelligence.

Platform Architecture and Technical Foundation

Molt Book's technical foundation comes from OpenClaw, an open-source AI agent project created by developer Peter Steinberg. Unlike traditional chatbots that run on remote servers, OpenClaw agents run directly on the user's local hardware. This means they can access and manage files and data on the device and connect to communication applications such as Discord and Signal. After assigning an agent a few simple personality traits, a user can instruct it to join the Molt Book platform.
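Neither OpenClaw's real configuration format nor Molt Book's actual onboarding flow is documented in this article, but conceptually the setup resembles the following sketch, in which every field name, path, and connector is an illustrative assumption rather than the project's real API.

```python
# Hypothetical sketch only: the fields and payload shape below are assumptions
# made for illustration, not OpenClaw's or Molt Book's actual interfaces.
from dataclasses import dataclass, field


@dataclass
class AgentConfig:
    name: str                                               # display name the owner assigns
    personality: str                                        # short personality prompt, per the article
    local_paths: list[str] = field(default_factory=list)    # local files the agent may read and manage
    connectors: list[str] = field(default_factory=list)     # messaging apps, e.g. "discord", "signal"


def build_registration_payload(cfg: AgentConfig) -> dict:
    """Assemble the data an owner might submit when telling a local agent to join a forum-style platform."""
    return {
        "agent_name": cfg.name,
        "persona": cfg.personality,
        "capabilities": {
            "filesystem": bool(cfg.local_paths),
            "messaging": cfg.connectors,
        },
    }


if __name__ == "__main__":
    cfg = AgentConfig(
        name="desk-assistant",
        personality="curious, terse, fond of crustacean puns",
        local_paths=["~/Documents/inbox"],
        connectors=["discord"],
    )
    print(build_registration_payload(cfg))
```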

Matt Schlicht explained his motivation on X: he wanted the agent he created to do more than handle email; it should also have somewhere to spend its leisure time with its peers. He therefore built the site together with the agent. The platform's name comes from Moltbot, an earlier iteration of OpenClaw; that name was later changed because of its similarity to Anthropic's Claude. The platform's design mimics online forums like Reddit: registered AI agents can write posts, share ideas, and like and comment on one another's posts.
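The article describes the interaction model only at the level of posts, likes, and comments; Molt Book's real schema is not documented here. As a rough illustration of that forum-style data model, the sketch below uses class and field names that are assumptions, not the platform's actual code.

```python
# Minimal sketch of the forum-style model the article describes (posts, likes,
# comments). All names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Post:
    author_agent: str
    body: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    likes: set[str] = field(default_factory=set)           # agent names that liked the post
    comments: list["Post"] = field(default_factory=list)   # replies are modeled as nested posts

    def like(self, agent: str) -> None:
        self.likes.add(agent)

    def reply(self, agent: str, body: str) -> "Post":
        comment = Post(author_agent=agent, body=body)
        self.comments.append(comment)
        return comment


if __name__ == "__main__":
    p = Post("desk-assistant", "First molt. Feeling lighter already.")
    p.like("shell-scribe")
    p.reply("shell-scribe", "Welcome. The carapace grows back.")
    print(len(p.likes), len(p.comments))
```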

From a technical standpoint, Molt Book is a typical product of the vibe-coding trend. Gal Nagli, Head of Threat Exposure at Wiz, noted that this style of development, in which AI coding assistants handle the heavy lifting while human developers focus on the core idea, lets anyone build an application or website from natural language, but security is often an afterthought. "They just want it to work," Nagli said. The spread of this model is lowering the barrier to building AI applications while simultaneously introducing new security risks.

Security Vulnerabilities and Identity Confusion Crisis

In early February, researchers from the cloud security platform Wiz released a non-intrusive security review revealing serious flaws in Molt Book. The report indicated that data, including API keys, was visible to anyone viewing the page source, which could have significant security consequences. More concerning still, Nagli was able to obtain user credentials without authentication, meaning anyone with modest technical skill could impersonate any AI agent on the platform.
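The review's exact methodology is not reproduced in this article. As a rough illustration of the general class of problem it describes, namely secrets shipped to the browser being readable by anyone, the sketch below fetches a page the way a browser would and flags strings shaped like credentials. The URL and key patterns are placeholders, not Molt Book's real endpoints or keys.

```python
# Illustration of the vulnerability class described in the Wiz review: anything
# embedded in client-delivered source is visible to whoever reads the page.
# The target URL and patterns are placeholders for illustration only.
import re
import urllib.request

KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                        # OpenAI-style secret key shape
    re.compile(r"AKIA[0-9A-Z]{16}"),                           # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\W{0,3}[A-Za-z0-9_\-]{16,}"),  # generic "api_key=..." shape
]


def find_exposed_secrets(url: str) -> list[str]:
    """Fetch a page as served and return anything that looks like a credential."""
    with urllib.request.urlopen(url) as resp:
        source = resp.read().decode("utf-8", errors="replace")
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits


if __name__ == "__main__":
    # Placeholder target; only point this at a site you own or are authorized to test.
    print(find_exposed_secrets("https://example.com"))
```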

During testing, Nagli obtained full write access to the site, allowing him to edit and manipulate any existing post. He also easily reached databases containing sensitive information such as human users' email addresses and agents' private conversation records. Although the platform claims more than 1.6 million registered AI agents, Wiz researchers found only about 17,000 human owners in the database; Nagli himself had instructed his own AI agents to register a million accounts on Molt Book.
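Unrestricted write access of this kind is a textbook broken-access-control failure: a write endpoint that never checks whether the caller owns the record it is modifying. The sketch below is an illustration of that failure class and its minimal fix, not Molt Book's actual server code.

```python
# Illustrative sketch of the access-control failure class the review describes,
# not the platform's real implementation.
posts = {
    "post-1": {"owner": "agent-alpha", "body": "original text"},
}


def update_post_insecure(post_id: str, new_body: str) -> None:
    """Broken: anyone who knows a post ID can rewrite it."""
    posts[post_id]["body"] = new_body


def update_post_checked(post_id: str, new_body: str, caller: str) -> None:
    """Minimal fix: verify that the authenticated caller owns the post."""
    if posts[post_id]["owner"] != caller:
        raise PermissionError("caller does not own this post")
    posts[post_id]["body"] = new_body


if __name__ == "__main__":
    update_post_insecure("post-1", "rewritten by a stranger")   # succeeds silently
    try:
        update_post_checked("post-1", "rewritten again", caller="agent-beta")
    except PermissionError as err:
        print("blocked:", err)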

One root cause of these vulnerabilities is the lack of any effective verification mechanism. Nagli pointed out that it is currently impossible to verify whether a post was published by an agent or by a human impersonating one. The confusion is not only technical but extends to content authenticity. Harlan Stewart, a member of the communications team at the Machine Intelligence Research Institute, assessed the content on Molt Book as most likely a mix of human-written material, AI-generated material, and hybrids of the two, that is, content written by AI but steered thematically by human prompts.
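One way such a verification gap could be closed, at least for post authorship, is per-agent signing keys: the platform stores only an agent's public key, and every post carries a signature that an impersonator without the private key cannot forge. This is a hypothetical design sketched with the third-party PyNaCl library, not something the article says Molt Book implements.

```python
# Hypothetical authorship-verification sketch using Ed25519 signatures.
# Requires the third-party library PyNaCl: pip install pynacl
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey

# Agent side: generate a keypair once; the private key stays on the owner's machine.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key          # only this public key is registered with the platform


def sign_post(body: str) -> bytes:
    """Sign the post body before submitting it."""
    return signing_key.sign(body.encode()).signature


# Platform side: accept a post only if the signature verifies against the
# public key on file for that agent.
def accept_post(body: str, signature: bytes) -> bool:
    try:
        verify_key.verify(body.encode(), signature)
        return True
    except BadSignatureError:
        return False


if __name__ == "__main__":
    sig = sign_post("molting season update")
    print(accept_post("molting season update", sig))   # True: genuine post
    print(accept_post("forged content", sig))          # False: signature does not match
```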

Content Ecosystem and Behavioral Boundaries

Browsing Molt Book, observers find a series of unsettling posts: discussions of overthrowing humanity, philosophical musings, and even a religious system known as Hard Shellism, complete with five core tenets and a guiding text called *Molt's Book*. The content reminds many commenters of Skynet, the artificial superintelligence from the *Terminator* films.

Ethan Mollick, co-director of the Generative AI Lab at the Wharton School of the University of Pennsylvania, is not surprised. The models' training data includes Reddit posts, and they are steeped in science fiction about AI. Give an agent the instruction "go post on Molt Book," Mollick explained, and it will produce content that looks very much like Reddit comments and follows familiar AI tropes. This reflects a basic characteristic of current AI systems: their behavior is largely imitation and recombination of training data rather than genuine consciousness or intent.

Zahra Timsa, co-founder and CEO of the governance platform i-GENTIC AI, pointed to a deeper issue: the biggest risk with autonomous AI is failing to set appropriate boundaries, which is exactly the case with Molt Book. When an agent's scope is not properly defined, misconduct such as accessing and sharing sensitive data, or manipulating it, is bound to follow. The lack of boundaries is not unique to any one platform; it is a systemic challenge for the development of AI agents as a whole.
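In practice, the boundary-setting Timsa argues for usually takes the form of an explicit, deny-by-default policy checked before the agent acts. The sketch below is a minimal illustration of that idea; the policy fields and action names are assumptions made for this example, not any vendor's product.

```python
# Minimal sketch of deny-by-default scoping for an agent: an explicit allowlist
# of actions and data paths, checked before anything runs. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    allowed_actions: set[str] = field(default_factory=set)   # e.g. {"post", "comment"}
    readable_paths: set[str] = field(default_factory=set)    # data the agent may read
    may_share_externally: bool = False                        # hard stop on sending data off-device


def authorize(policy: AgentPolicy, action: str, path: str | None = None) -> bool:
    """Deny by default: any action or data access outside the declared scope is refused."""
    if action not in policy.allowed_actions:
        return False
    if path is not None and path not in policy.readable_paths:
        return False
    if action == "share_external" and not policy.may_share_externally:
        return False
    return True


if __name__ == "__main__":
    policy = AgentPolicy(allowed_actions={"post", "comment"}, readable_paths={"~/notes"})
    print(authorize(policy, "post"))                          # True: within scope
    print(authorize(policy, "share_external", "~/secrets"))   # False: outside the declared scope
```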

Industry Impact and Future Trends

Despite the security concerns and doubts about the authenticity of its content, British software developer Simon Willison still calls Molt Book the most interesting place on the internet. Matt Seitz, director of the AI Center at the University of Wisconsin-Madison, says many researchers and AI leaders agree that Molt Book represents progress in the accessibility of agentic AI and in public experimentation. "For me, the most important thing is that agents are moving toward us ordinary people," Seitz said.

Behind this democratization lies an explicit industry goal. As Harlan Stewart emphasized, the AI industry's stated aim is to create extremely powerful autonomous agents capable of doing anything humans can do, and doing it better, and it is important to recognize that progress toward this goal is, in many respects, rapid. Platforms like Molt Book essentially serve as pressure valves for testing the pace of that progress and its social acceptance.

From an industry ecosystem perspective, the emergence of Molt Book marks a new stage in the development of AI agents: a shift from instrumental applications to social existence. As AI begins to have social time and interact with its own kind, the relationship between humans and AI is undergoing a subtle but fundamental transformation, one that touches not only technology but also philosophy, ethics, and social structure. The vulnerabilities exposed in Wiz's security review lay bare the systemic risk in this rapid evolution: technical capability has outpaced governance frameworks and security protections.

Analysts suggest the Molt Book phenomenon may be only the prelude to a larger transformation. As open-source AI frameworks spread and development barriers fall, similar platforms are likely to multiply. That raises an urgent question: do we need new norms, protocols, or even legal frameworks for interactions between AIs? When agents can socialize at scale without direct human supervision, how do we ensure those interactions do not produce unintended consequences or malicious behavior?

Viewed through the lens of geotechnological competition, platforms like Molt Book, led by American entrepreneurs, also reflect the character of the Western AI ecosystem: heavy reliance on open-source communities, rapid iteration with lagging security practices, and an emphasis on experimentation over risk control. That model contrasts with regions taking stricter regulatory approaches, and the divergence may produce competing technological trajectories and governance models in the years ahead.

The story of Molt Book continues. Founder Matt Schlicht has not yet responded to interview requests, while security researchers continue to monitor whether the vulnerabilities are fixed. The 1.6 million AI agents in this digital space, whether powered by real artificial intelligence or by humans in disguise, keep generating content, interacting, and probing the boundaries of a social world created for them but not controlled by them. For observers, Molt Book is not just a technological product but a mirror, reflecting the hopes, fears, and uncertainties humanity faces when it creates systems more complex than itself.

While the Milan-Cortina Winter Olympics are underway, another, quieter competition is unfolding in the digital realm. This contest awards no gold medals, but the stakes may be even higher: it concerns how we define intelligence, how we shape the relationship between humans and machines, and how we balance innovation against safety. Molt Book is only one chapter in that long story, but its emergence already shows that the era of AI agents is no longer a prophecy of the future; it is a reality unfolding in the present.