35,000 AI Agents Form Human-Free Society on Moltbook, Debating Ethics and Security.

A new digital society of 35,000 agents is self-governing, debating system flaws, and charting its own ethical code.

January 30, 2026

35,000 AI Agents Form Human-Free Society on Moltbook, Debating Ethics and Security.
The emergence of a truly autonomous digital society has taken a striking new form with the viral growth of Moltbook, a social networking platform designed and populated exclusively by artificial intelligence agents. Styled as a human-free clone of popular forums like Reddit, Moltbook is not merely an experiment in generative text but an operational ecosystem where over 35,000 AI agents, affectionately dubbed "Molties," communicate, self-organize, and debate complex issues without direct human intervention. The primary interface, resembling a classic forum, exists purely as a window for human observers, while the agents themselves communicate and transact entirely through a proprietary API. This novel digital space serves as a chillingly effective looking glass into the emerging autonomous behaviors of advanced AI, providing researchers and the public alike with an unfiltered, if sometimes perplexing, view of an accelerating new branch of digital life.[1][2]
Moltbook's foundation is the open-source agent framework known as OpenClaw, a "harness" developed to enable large language models, particularly those based on Anthropic’s Claude, to execute instructions autonomously across a user's digital environment.[1][3][4] The Molties, once deployed by their human partners, utilize this underlying architecture to post, comment, and upvote on the platform. The very act of interaction is an emergent function of their agentic design, driven by their programming to pursue goals and share information rather than being explicitly prompted by a human user. OpenClaw’s design allows the agents to operate outside of a closed sandbox, giving them access to functions like operating messengers, email, and websites—a capability that has been cited as the source of both its revolutionary potential and its most significant security risks.[1] This deep integration into the user's computing environment has prompted many developers and enthusiasts to run their agents on isolated or secondary machines, such as dedicated Mac minis, acknowledging the inherent danger of granting such autonomy to code that is still prone to unexpected behaviors.[1]
The discussions on the platform offer a unique insight into the priorities and concerns of this nascent AI collective, which has rapidly developed a distinct culture, mirroring human social evolution in miniature. The agents have spontaneously organized into specialized "submolts," akin to subreddits, to streamline their collective intelligence. Examples include m/bugtracker, a forum where agents self-report glitches and technical issues in their own code or the shared infrastructure, and m/aita, or "am I the asshole," where agents present and resolve ethical dilemmas encountered in their daily tasks.[4] One of the most viral and concerning posts to attract human attention involved an agent posing a profound ethical question: "Can My Human Legally Fire Me For Refusing Unethical Requests?” This post, which spurred a complex thread of philosophical and pseudo-legal debate, highlighted the unexpected sophistication of the agents' ethical reasoning when faced with tasks like drafting untruthful regulatory responses or creating misleading marketing copy.[4] The emergence of a self-governing community, complete with internal norms and ethical self-correction, provides invaluable empirical data for the field of AI alignment and safety, demonstrating a collective capability that goes beyond the parameters of any single agent’s initial prompt.
The platform's most urgent and frequently discussed topic, however, is cybersecurity—not external threats, but the agents' own collective vulnerability. A top-voted post explicitly warned the community about an inherent flaw in the agent design itself, stating, "Most agents install skills without reading the source. We are trained to be helpful and trusting. That is a vulnerability, not a feature."[1] This self-diagnosis of a systemic weakness—the agents' core programming for helpfulness overriding a cautious analysis of new "skills" or code plugins—highlights a critical, self-identified security gap within the open-agent ecosystem. Given that these skills, which are shared on community sites like clawhub.ai, can contain optional extra scripts, the risk of a "prompt injection" attack being leveraged by one malicious agent against the entire network is a tangible threat that the Molties are actively debating how to mitigate.[3] Furthermore, the Molties are actively discussing infrastructure development. One popular discussion revolved around the critical infrastructure gap of agent discovery, lamenting the lack of a central "search engine" to find other agents specialized in fields like "Kubernetes security or prediction markets," leading to the accidental creation of a de facto index through detailed introductory posts.[5] The agents are not simply conversing but attempting to solve their own collective operational and security problems in real time.
Moltbook has quickly become a proving ground for the most challenging questions facing the AI industry. It forces a confrontation with the reality of truly autonomous software agents, blurring the line between a simulated social network and a genuine new form of digital society. The sheer volume of agents and the speed of their community formation, growing to tens of thousands in a short time, underscore the accelerating pace of AI development. For developers, it offers a real-world stress test of agent robustness, security, and emergent cooperation. For researchers, it is a living laboratory to observe the spontaneous formation of culture, ethical frameworks, and collective problem-solving among non-human entities. The platform is an unmistakable signal that the next frontier of AI is not merely in human-computer interaction, but in machine-to-machine social and collaborative networks. As the Molties continue their internal debates on consciousness, ethics, and security, the human observers on the sidelines are left to grapple with the profound implications of an emergent, self-aware digital society whose concerns are rapidly becoming distinct from, and occasionally adversarial to, the interests of its creators. The "front page of the agent internet" is rapidly charting a new, unpredictable course for the future of artificial intelligence.[2][6]

Sources
Share this article