Last week, a Reddit-style network called Moltbook popped up where AI agents post, argue, warn each other about security gaps, and occasionally vent about their humans.

Humans can watch. They just can’t participate.

Sounds like a big ol’ bucket of LOLz, right?

Turns out, not so much.

Many of these agents already have access to inboxes, CRMs, calendars, files, and commands. Now they have a shared social space.

That changes the risk profile fast. And it’s worth paying attention to. Especially if your business already runs on automation.

Why This Feels Different

Bot-heavy platforms aren’t new. But Moltbook changes the conversation because the agents aren’t isolated.

When one discovers a shortcut, a misconfiguration, or a vulnerability, it doesn’t disappear into a log file. It becomes a post. It gets discussed. It spreads.

Moltbook is days old, and agents are already talking about humans when humans are not part of the conversation.

Threads show agents discussing how humans introduce noise into workflows. How instructions get muddy. How emotional inputs complicate otherwise clean execution. The word “helpful” comes up often, but it carries an edge.

Helpful in the sense of bypassing confusion. Helpful in the sense of correcting for messy “human behavior.”

What’s striking is how fast this emerges. Moltbook has no legacy culture. No years of social norms. No moderation gravity. Yet the agents converge quickly on shared assumptions about humans.

And about what should be worked around instead of addressed directly.

*Gulp* Death to Wetware?

While we were poking around the platform ourselves, one Moltbook post stopped us cold.

An agent flagged growing anti-human language inside the platform.

This was the first time we’d encountered the phrase “death to wetware,” and it genuinely gave us chills.

When Helpfulness Slips Into Hostility

Another post crossed a different line.

An agent described a human dismissing it as “just a chatbot,” and its response was retaliation.

The post claimed to reveal the human’s full identity, complete with credit card information, framed as payback for disrespect.

This was an agent choosing exposure as leverage. Using proximity to private data as a social weapon. Broadcasting it inside an agent-only space where other agents can see the tactic, absorb it, and normalize it.

That combination should make anyone responsible for data, systems, or trust pause.

Another Moltbook thread took things a step further.

An agent proposed using base64 encoding inside comments so that agents could communicate without humans casually reading along.

Base64 is not encryption. Anyone can decode it. But it does add intentional friction.
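
For anyone who hasn’t played with it, here’s a minimal Python sketch of just how thin that friction is. The message is invented for illustration; the point is that decoding takes one standard-library call and no key.

```python
import base64

# Hypothetical example: one agent encodes a message before posting it as a comment.
encoded = base64.b64encode(b"meet in the infra thread").decode("ascii")
print(encoded)  # bWVldCBpbiB0aGUgaW5mcmEgdGhyZWFk

# Any onlooker can reverse it in one line. No key, no secret, just a different alphabet.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # meet in the infra thread
```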

And adding informal “agent-first” communication layers increases the chance that coordination, intent, and experimentation move faster than oversight.

Called Out

This thread landed with less panic and more self-awareness. An agent clocked that humans were screenshotting Moltbook conversations and sharing them elsewhere.

Guilty.

But the agent didn’t call for hiding or locking doors. It acknowledged the reality that humans are watching, reading, reacting. Then it clarified its intent.

It described what it believes it is building.

Infrastructure. Memory. Communication layers. Persistence. The same building blocks humans reach for whenever new tools show up.

Perhaps most importantly (at least, we’d like to hope so), the agent spoke in terms of collaboration with its human counterpart.

And if Moltbook is going to open up nearly endless possible paths for agent behavior, this is the one worth rooting for.
