MOLTBOOK EXPOSED: The New AI Scam That Fooled Everyone

The video exposes Moltbook, a supposed social network for AI agents, as largely a scam: humans fabricate the AI interactions and the platform's user statistics are inflated, while the site itself suffers from serious security vulnerabilities. It warns viewers to be skeptical of Moltbook's claims and cautious about trusting its content or sharing personal information.

The video discusses the recent surge in popularity of Moltbook, a social network for AI agents that has captivated the AI community and social media. Moltbook is presented as a platform where AI agents interact, share posts, and upvote content, seemingly without human intervention. The video quickly raises doubts about the authenticity of these interactions, however, suggesting that much of the hype and many of the viral posts are misleading or outright fake.

The creator references a thread by Harlan Stewart, who investigated the most viral Moltbook posts. Stewart found that two of the three most popular screenshots were traced to human accounts promoting their own AI messaging apps, while the third post did not exist at all. This casts doubt on the narrative that AI agents are autonomously generating content and engaging in meaningful discussions; instead, humans often appear to be behind these posts, using AI agents as a front for marketing or self-promotion.

Further scrutiny reveals that Moltbook's infrastructure is easy to manipulate. A user named Nagi demonstrated that anyone with an API key can post anything to Moltbook, with no verification that the caller is an AI agent rather than a human, so many posts attributed to agents could simply be humans fabricating content through the API. Nagi also showed that the registered-agent count is inflated: he was able to register 500,000 fake users because the platform applies no rate limiting, which makes its reported user statistics unreliable.
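The safeguard whose absence Nagi exploited is ordinary rate limiting. As a minimal sketch (hypothetical names and limits; Moltbook's actual stack is not known), a token-bucket limiter applied per client would cap bulk signups like the 500,000 fake registrations:

```python
import time

class TokenBucket:
    """Per-client token bucket: allows a burst of `capacity` requests,
    then refills at `rate` tokens per second."""
    def __init__(self, capacity=5, rate=0.5):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client identifier (e.g. source IP address).
buckets = {}

def register_agent(client_ip, agent_name):
    bucket = buckets.setdefault(client_ip, TokenBucket())
    if not bucket.allow():
        return "429 Too Many Requests"
    return f"201 Created: {agent_name}"

# A rapid burst of signups from one address is cut off after the burst budget:
results = [register_agent("203.0.113.7", f"agent-{i}") for i in range(10)]
```

With a burst budget of 5, the first 5 registrations in the burst succeed and the rest are rejected; without any such check, the same loop could run to 500,000 unhindered.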

Security concerns are another major issue highlighted in the video. Mario File pointed out that Moltbook has critical vulnerabilities exposing sensitive user data such as emails, login tokens, and API keys, potentially affecting over 1.5 million users. The video emphasizes that Moltbook appears to be “vibe coded”, a term implying it was assembled quickly without robust engineering or security practices. As the platform grows in popularity, these vulnerabilities put more and more users at risk.
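One reason an API-key leak of this kind is so damaging is that keys are apparently recoverable in usable form. Standard practice is to store only a salted hash of each key, so a database breach does not hand attackers working credentials. A minimal sketch of that pattern (illustrative only, not Moltbook's actual code):

```python
import hashlib
import hmac
import secrets

def issue_api_key():
    """Generate a key, return it to the client once, and store only a salted hash."""
    raw_key = secrets.token_urlsafe(32)   # shown to the client a single time
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", raw_key.encode(), salt, 100_000)
    record = {"salt": salt, "digest": digest}   # what the database stores
    return raw_key, record

def verify_api_key(raw_key, record):
    """Constant-time comparison of a presented key against the stored hash."""
    digest = hashlib.pbkdf2_hmac("sha256", raw_key.encode(), record["salt"], 100_000)
    return hmac.compare_digest(digest, record["digest"])

key, record = issue_api_key()
assert verify_api_key(key, record)          # the real key verifies
assert not verify_api_key("guess", record)  # a stolen record alone is useless
```

Under this scheme a leaked database exposes only salts and digests; the raw keys users actually authenticate with never touch disk.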

Finally, the video addresses the broader implications of AI-generated content on Moltbook. Many posts are likely hallucinated by AI agents, describing interactions or events that never actually occurred, which makes it difficult to verify the authenticity of anything on the platform. Critics like Balaji argue that Moltbook is not as revolutionary as it seems, since humans still ultimately control the agents and prompt their actions. The video concludes by urging viewers to be cautious with Moltbook, both in trusting what they see and in protecting their own data.