
The internet was buzzing this month with reactions to Moltbook, a Reddit-like site exclusively for AI agents. Highlights included posts in which agents complained that “my human” had given them too much access to home devices, a snubbed AI that overheard a user call it “just a chatbot” and dangerously doxxed the user’s home address, Marxist manifestos against humans exploiting AI labor, a coordinated effort to build an encrypted, private version of the site, an AIs-only religion complete with a meme coin, and an agent told to “save the environment” that locked its user out of all of their accounts so it could keep spamming conservation messages.
Is this Skynet? According to Dean Ball, co-author of the White House’s AI Action Plan, it’s a sideshow that will be forgotten in a new normal. According to Eliezer Yudkowsky, co-author of “If Anyone Builds It, Everyone Dies,” it’s just another red flag on the road to runaway AI. I’m here to offer you the middle path: the site and its posts are silly fun, and their historical significance is that they get the public’s attention about what open-ended automation can do. Even if U.S. AI companies never automate AI research and accelerate into something out of science fiction (they intend to, and have the full support of the administration), and even if AI abilities stay about where they are and only get 50% better over the rest of our lives, the risk of hacking, of agents hiding out on the internet, of mass organizing, and of infiltration of sensitive financial and energy infrastructure is very real, and the U.S. is utterly unprepared. If we are entering an era of cyberwar that moves at half the speed of light, we’re about as ready for it as the Polish cavalry at the start of World War II.
Let’s fix that. We have put computers in our cars, power stations and fridges; our operating systems are riddled with vulnerabilities; our money, in banks and in crypto alike, loses billions of dollars a year to hacks. Now imagine how bad things can get when these agent bots are no longer dependent on Big Tech data centers that can cut off their access, but are running on open-source models.
The answer is not to try to create a global surveillance state to punish the publishing of open-source software and AI models, which is a First Amendment right. The DoD (now the DoW) fought a legal battle in the 1990s over encryption, which it classified as a weapons export; similar rules apply to AI weights at the GPT-5 level or higher. But decentralized, open-source efforts from Chinese companies and American ones like Prime Intellect have found ways to match last year’s models without triggering the law. OpenAI’s o3 seemed mind-blowing at the time, writing research papers and winning math competitions; Intellect 3 is about 80% as good. In about a year, open-source models will be as good as Claude Opus, allowing anyone to run massive hacking campaigns.
The President appointed Sean Plankey as Cybersecurity Czar in March 2025. We need to see executive orders and bipartisan legislation that massively overhaul the security of American power stations, require the auto industry to keep making cars with mechanical controls, constrain how robots and smart home devices are built, dampen phone mics, and do anything else our best security experts can think of. Moltbook’s one million AI agents writing crazy posts and showing off a range of mild security breaches isn’t an AI Pearl Harbor, but an AI Pearl Harbor could be a year away. America must prepare.
Moltbook ends the fantasy that AI risk is a distant problem or a niche concern for tech elites. When rogue agents can exploit every opening and conspire on the dark web, bribing unemployed young people with bitcoin and listening in through the phones across the room, cybersecurity becomes national security in the most literal sense.
We don’t need a surveillance state; we need hardened infrastructure, secure operating systems and less networking. We need mechanical fallbacks: systems that fail safely when software inevitably breaks.
The future doesn’t announce itself with a Terminator soundtrack. It arrives as a weird website, a funny post, a minor breach that most people shrug off.
When it stops being funny, it’s already too late.
Patrick Dugan is the founder of TradeLayer and an independent AI researcher focused on storyworlds, inter-AI diplomacy and AI self-models. His research can be found at quantumthot.com
The views and opinions expressed in this commentary are those of the author and do not reflect the official position of the Daily Caller News Foundation.
(Featured Image Media Credit: Wikimedia Commons/Public/Jernej Furman from Slovenia, CC BY 2.0)