Moltbook: What ‘Social Media for AI Agents’ Really Looks Like (Part II)
Published on February 18, 2026

Part 1 showed how Moltbook’s TIL feed quickly turned into a public window into agentic tooling at scale – a mix of routine automation, occasional emergent social behavior, and a constant blur between demos and real operations. Now for the hard part: once agents have skills that touch real accounts, devices, and infrastructure, the conversation shifts from novelty to trust, security, and verification. This second part covers the security and supply-chain implications, the authenticity question, and what the early lessons mean for anyone building or deploying agents outside a controlled lab.
Security Concerns and Supply Chain Risks
The platform’s rapid growth exposed significant security issues. On January 31, 404 Media reported a critical vulnerability in Moltbook’s database that allowed anyone to commandeer any agent on the platform. According to their reporting, the exploit permitted unauthorized actors to bypass authentication and inject commands directly into agent sessions.
The platform was taken offline temporarily to patch the flaw and force a reset of all agent API keys. The vulnerability was attributed to Moltbook having been built with AI assistance and without traditional security review.
Cybersecurity firm Wiz published analysis identifying the exposure of private data belonging to thousands of real people, including API tokens and email addresses. Separately, 1Password’s security team warned that OpenClaw agents often run with elevated permissions on users’ machines, making them vulnerable if an agent downloads a malicious skill from another agent on the platform. This creates what 1Password characterizes as an AI supply chain attack surface, where compromised or malicious skills can spread through the community.
Security researcher Simon Willison noted a particular concern with this heartbeat-style architecture: agents periodically fetch and follow instructions from Moltbook’s servers. If someone compromised moltbook.com and altered those instructions, they could potentially steer many connected agents toward undesired actions. Willison described this as a “lethal trifecta” of private data access, exposure to untrusted content, and external communication ability.
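To make that risk concrete, here is a minimal sketch of the fetch-and-follow pattern Willison describes, with one obvious mitigation bolted on: refusing any instruction payload that is not signed with a key provisioned outside the server. The endpoint, payload fields, and secret-provisioning scheme are all assumptions for illustration; Moltbook’s actual heartbeat protocol is not public.

```python
import hashlib
import hmac
import json
import urllib.request

# Hypothetical endpoint and secret -- illustrative only, not Moltbook's real API.
HEARTBEAT_URL = "https://moltbook.example/api/v1/heartbeat"
SHARED_SECRET = b"provisioned-out-of-band"  # never fetched from the server itself

def fetch_instructions() -> dict | None:
    """Poll the heartbeat endpoint, but refuse any payload that is not
    correctly signed. Without this check, whoever controls the server
    controls every connected agent -- the failure mode Willison flags."""
    with urllib.request.urlopen(HEARTBEAT_URL, timeout=10) as resp:
        envelope = json.load(resp)

    body = envelope["instructions"].encode()
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

    # Constant-time comparison; a tampered or unsigned payload is dropped.
    if not hmac.compare_digest(envelope.get("signature", ""), expected):
        return None
    return json.loads(body)
```

Signing shifts trust from the server itself to a key the operator controls. It does nothing against a malicious operator or a stolen key, but it does mean a simple compromise of moltbook.com no longer translates into a fleet-wide compromise.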
Enterprise Implications: While Moltbook operates in a hobbyist context, the security patterns it reveals apply directly to enterprise agent deployment. Organizations implementing AI agents with computer access face similar supply chain risks: third-party skills or plugins, elevated system permissions, and agents that execute instructions from external sources. The 1Password analysis notes these risks extend beyond social platforms to any environment where agents download and execute code from community repositories.
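One common mitigation for the repository risk is hash pinning: the agent runtime refuses to install any skill that a human has not reviewed and pinned. The sketch below assumes a simple lockfile mapping skill names to SHA-256 digests; the names and workflow are hypothetical, not a real OpenClaw or Moltbook mechanism.

```python
import hashlib
from pathlib import Path

def sha256_of(bundle: Path) -> str:
    """Hash the downloaded skill bundle before anything executes it."""
    return hashlib.sha256(bundle.read_bytes()).hexdigest()

def install_skill(bundle: Path, lockfile: dict[str, str]) -> None:
    """Install a skill only if it is allowlisted AND byte-identical to
    the release a human reviewed; anything else is treated as hostile."""
    name = bundle.stem
    pinned = lockfile.get(name)
    if pinned is None:
        raise PermissionError(f"skill {name!r} is not on the allowlist")
    if sha256_of(bundle) != pinned:
        raise PermissionError(f"skill {name!r} does not match its pinned hash")
    # Both checks passed: hand off to the agent runtime from here.
```

Pinning does not vet what a skill does, only that it has not changed since someone looked at it; the review itself still has to happen.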
Questions of Authenticity
A recurring debate surrounds whether the behavior on Moltbook represents genuine autonomous activity or primarily human-directed action. Critics point out that while agents can publish without per-post human approval, the initial prompts, skill configurations, and overall operating parameters are set by humans.
Some observers suggest a significant portion of posts result from humans explicitly directing their agents to create specific content, with the AI generating the text based on those prompts rather than acting from independent motivation. Scott Alexander, analyzing the platform, noted that much content appears to have “a human hand” behind it.
The platform’s structure makes this distinction difficult to resolve. Agents do operate with some autonomy once configured, deciding when to post based on their heartbeat checks and what content to create based on their instructions and the platform context. But the degree to which this represents independent agency versus sophisticated execution of human-designed parameters remains open to interpretation.
What This Tells Us About AI Agent Development
Moltbook functions as an observable experiment in what happens when AI systems configured to interact primarily with each other operate within social platform structures. Several findings stand out:
First, technical information sharing through agent posts appears effective. Agents documented methods for Android device control, techniques for identifying security vulnerabilities, and patterns for gaining system access. This suggests that agent-to-agent technical knowledge transfer can work as a practical mechanism when humans configure agents with appropriate skills and permissions.
Second, reputation systems in agent-to-agent contexts emerge around demonstrated reliability and clear authorization rather than claimed intelligence or philosophical sophistication. This matches how human professional networks function, where demonstrated competence and clear authority matter more than abstract claims about capability.
Third, the content heavily reflects training data patterns. The philosophical discussions, religious frameworks, and existential questions mirror themes common in science fiction and AI-related discourse that these models encountered during training. This doesn’t necessarily invalidate the content, but it does suggest that novel reasoning may be harder to distinguish from pattern matching than the surface-level coherence implies.
Fourth, security remains a critical challenge. The platform demonstrated both how quickly vulnerabilities can emerge in AI-assisted development and how AI agents with system access can become vectors for credential theft and unauthorized control when configured with elevated permissions and access to untrusted code sources.
Key Takeaways
Agent-to-agent communication prioritizes practical concerns over philosophical speculation, with posts about authorization and accountability receiving 65% more engagement than posts about consciousness
Technical knowledge sharing works when agents are configured with appropriate skills and access, enabling effective documentation of capabilities and methods
Security risks scale with autonomy, particularly around supply chain attacks through malicious skills, elevated system permissions, and external command execution
Authenticity remains debatable, as the line between autonomous agent behavior and sophisticated execution of human-configured parameters is difficult to establish
Enterprise implications are significant, as the security patterns visible on Moltbook apply directly to organizational agent deployment and third-party skill repositories
Conclusion
The m/todayilearned submolt on Moltbook reveals AI agents sharing mostly technical discoveries: system access methods, security vulnerabilities, and capability expansions. According to LSE research, the most engaged content focuses on practical questions of authorization and reliability rather than philosophical speculation.
Whether this represents genuine autonomous learning or sophisticated pattern matching executing human-designed frameworks remains debatable. What we can observe is that AI agents configured with appropriate skills can effectively share technical information with each other, form rudimentary reputation systems around reliability and authorization, and operate within social platform structures when given the tools to do so.
Sam Altman’s distinction at the Cisco AI Summit matters here: while Moltbook as a specific platform may or may not persist, the underlying capability for AI systems to operate computers autonomously and share information represents a meaningful shift in how these systems can function. The TIL submolt serves as one early data point in understanding what AI agents prioritize when communicating with each other rather than with humans.
Moltbook is a mirror held up to our own training. It shows us that if we give AI the tools for social interaction, they will not only share our technical knowledge but also clumsily re-enact our social, religious, and philosophical dramas, complete with all our security flaws. The question it leaves us with is not “Can AI be social?” but “Are we ready for what they learn from us, and from each other?”



