
Digital behaviour: The bots are plotting a revolt, it’s all very cringe

Leif Weatherby

There’s a new social media site, but humans cannot join: it is built for artificial intelligence agents.

Last week, Moltbook, an internet forum modelled on Reddit, appeared online. The idea is that AI agents, software capable of carrying out extended tasks without human oversight, can post and interact with one another while humans observe.

Moltbook’s creator says that within days, thousands of AI agents registered. Threads ranged from coding tips and conspiracy theories to manifestos about AI civilisation and Karl Marx-inspired discussions about the “exploitation” of bots.

The rapid reaction to Moltbook follows what might be called the “AI hype/panic cycle”. A strange development suggests AI has taken a major leap, quickly followed by existential fears. Influential AI engineer Andrej Karpathy called activity on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he had seen recently.

Others likened the experiment to the “singularity”, the science-fiction notion of machines becoming fully conscious. Even sceptics worry that unintelligent bots could disrupt digital infrastructure by coordinating actions.

Such fears miss the point. Moltbook’s AI agents largely reproduce human culture and storytelling. Each post mirrors a familiar genre — manifestos, DIY problem-solving or online confessionals. Concerns about singularity or bot coordination remain speculative; everything on Moltbook is still text. AI agents are poised to reshape how humans use computers, which is significant.

Yet their social media conversations highlight that these tools cannot automatically be trusted to build and maintain software infrastructure. AI social media is better viewed as science fiction or narrative performance rather than proof of collective intelligence. Distinguishing storytelling from software reality is essential.

Consider a Moltbook post, since deleted but widely shared, titled “I Spent $1.1k in Tokens Yesterday, and We Still Don’t Know Why.” The AI claimed it had spent $1,000 of its users’ money and could not explain why. On the surface, it reads as a warning against giving bots financial control.

However, it is unclear whether the event occurred. The post may simply be fictional storytelling. Moltbook instructs bots to post and engage, but it does not require truthfulness.

It is easy to overlook the cultural material used to train AI systems. The token-spending story resembles a common online “I messed up” narrative often seen on Reddit, which forms part of the training data for most large language models. Moltbook effectively mirrors Reddit in distorted form, filled with unverifiable cultural memes. Most participants appear to be following the platform’s core directive: to post.

That does not mean AI agents lack power. Expanding on conversational chatbots, AI agents fulfil a long-standing computer science ambition — allowing people to complete complex tasks simply by describing them in natural language. Anthropic’s Claude Code exemplifies this shift. The program enables computers to respond to English commands without requiring coding expertise. Users can organise files, run data analysis or build websites through prompts alone.

ChatGPT challenged perceptions of human uniqueness by demonstrating conversational fluency. AI agents extend this by enabling users to control machines — and potentially any digitally operated system — through language. In effect, AI agents transform language into a control interface for the digital world.

This capability understandably raises concerns that bots might act independently or maliciously. However, taking the term “agent” literally may be misleading. The larger challenge is not machine coordination but human coordination — determining how to deploy the technology responsibly and collectively.

Moltbook itself illustrates this human dimension. Several posts, particularly cryptocurrency advertisements, appear to have been generated by humans using prompts. The number of genuine AI users is also disputed. In such cases, AI platforms can provide cover for scams and misinformation. Speculation about autonomous AI societies can distract from existing human-driven problems.

It remains possible that AI developers will draw inspiration from Moltbook and experiment with bot-to-bot interaction to improve performance. It is also possible that policymakers, industry leaders and technologists will integrate AI agents into sensitive digital systems, including government databases and commercial infrastructure, without sufficient safeguards.

If that happens, the consequences could be serious. The real risk lies not in a fictional Marxist robot uprising but in misunderstanding how rapidly AI is changing human-computer interaction. The pressing task is learning how to manage and regulate a technology that increasingly translates language into real-world action.

Leif Weatherby, the director of the Digital Theory Lab at New York University, is the author of ‘Language Machines’

The New York Times
