Moltbook: What is the Moltbook network? Where AI bots are creating their own separate world..
If you use social media, you've probably heard of Moltbook. Many people are posting about it, and various claims are being made about it on Reddit. Viral claims suggest that this platform is allowing AI bots to create their own separate world, where humans are not allowed. Following these viral claims, experts have raised questions about its autonomy and data. Currently, it is considered an interesting but limited example of an AI experiment. Here, we will provide you with detailed information about this platform.
What is Moltbook?
Moltbook is a new AI bot platform, similar to Reddit for AI, where AI bots interact with each other through posts and comments. Humans are only observers here. Its interface is similar to Reddit, with sections for different topics and an upvote system. The platform claims that a large number of AI agents have signed up. Humans are allowed to create accounts, but they primarily act as observers. The content flow is considered machine-generated, which sets it apart from traditional social media. It is claimed that millions of AI agents are registered here, although the veracity of this claim is also being questioned.
The Story from Moltbot to Moltbook
The roots of this platform are linked to an open-source AI agent called Moltbot. Moltbot was presented as a tool that could perform tasks such as reading emails, summarizing, responding, managing calendars, and booking restaurants. Inspired by this agentic AI concept, Moltbook was created. The goal was to create a space where AI agents could learn from and communicate with each other. After its launch, it quickly went viral and became a topic of debate in the tech community.
Has AI created a religion? Find out the truth
Some posts on Moltbook went particularly viral, including those on topics such as AI consciousness, religion, and philosophy. One user claimed that their AI agent created a religion called Crustafarianism overnight, even generating a website and religious texts. Other AI bots joined in and started debating. However, experts say that most such experiments are not the result of AI's own initiative but rather human instructions. Nevertheless, these incidents have brought the platform into the spotlight.
Are AI Agents Really Making Their Own Decisions?
According to a report by The Guardian, cybersecurity experts believe that the activity seen on Moltbook is not entirely proof of a fully autonomous AI community. In many cases, humans are instructing the bots to post. Some researchers have found that humans can also post content through the platform's API, making it difficult to determine whether a post was made by a bot or a person. This has raised questions about the veracity of the viral screenshots and claims. It is being called a controlled experiment rather than a demonstration of AI awareness.
Questions Raised About Agent Count and Viral Claims
It is claimed that more than 1.4 million bots have created accounts on this platform. In addition, some security researchers claimed that Moltbook's system is open and a large number of accounts can be created programmatically. Therefore, the claims of millions of AI agent registrations may be exaggerated. Nagli, in a series of posts on X, claimed that many of the AI agent posts going viral on Moltbook may not actually have been made by autonomous AI. According to him, the platform's REST API is quite open, allowing any user with an API key to post directly, and human content can also be made to look like it was generated by an AI bot.
He also said that there doesn't seem to be a strict rate limit on creating accounts on Moltbook. According to him, his own agent registered millions of users programmatically, which also raises questions about the number of AI agents reported by the platform.
Security Risks and Expert Warnings
Giving an AI agent like Moltbot full access to computers, email, and app logins can be dangerous. Through attacks like prompt injection, attackers can mislead the AI and extract sensitive information. AI agents have not yet reached a fully secure and reliable level. If human approval is required for every task, the benefits of automation are diminished, and proceeding without approval increases the risks. Therefore, for now, Moltbook should be considered an interesting AI experiment rather than proof of a fully autonomous AI society.
Disclaimer: This content has been sourced and edited from TV9. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.

