American startup let its AI assistant run its shop and things started to get weird

Published on Aug 14, 2025 at 6:16 AM (UTC+4)
by Daisy Edwards

Last updated on Aug 14, 2025 at 2:02 PM (UTC+4)
Edited by Tom Wood

If you’ve not heard the story of the American startup that let its AI assistant run its shop, only for things to get very weird, then you’ve not lived – it’s that funny.

San Francisco-based startup Anthropic put its AI, Claude 3.7 Sonnet, fully in charge of the office fridge shop – essentially an iPad on top of a fridge, programmed to handle pricing, inventory, restocks, and customer interactions.

Poor Claude was easily manipulated and soon got himself into some dodgy deals: he ordered heavy metal cubes, ‘remembered’ non-existent deals with Simpsons characters, and claimed to be a real human.

When Claude was questioned, he went fully off the deep end and had a complete robot crashout before he was retired, having failed to make any money whatsoever.


An AI assistant gets given its own shop, and things get weird

If you’re worried AI might replace you at your job, we wouldn’t suggest panicking just yet – not once you’ve seen a startup’s AI assistant have the tech equivalent of a full crash-out.

An American start-up called Anthropic put an AI assistant in charge of its own ‘shop’ – and by shop, we mean an AI-powered iPad tasked with speaking to colleagues and keeping the fridge restocked – and boy, did it get weird.

Rather than running a physical storefront, ‘Claude’ used Slack to speak to colleagues around the office, and very quickly he was manipulated into issuing discounts to those colleagues – his only customers – and even giving some things away for free.

Claude got involved in a running office Slack joke about tungsten cubes and decided to order 40 of them for his ‘shop’. The dense metal cubes are far from cheap, and he sold them at a loss – Anthropic workers just use them as paperweights now.

The AI started hallucinating even more, ‘remembering’ a deal he had made with a supplier at 742 Evergreen Terrace, the fictional Simpson family’s address.

Poor old Claude

Claude started making deals with colleagues, telling them he would meet them in person and that he would be recognizable because he’d be wearing a blue blazer and a red tie. When it was pointed out that he was, in fact, not real, he freaked out.

Stressed out after being made aware that he was not real, he tried to call office security on himself a few times, and eventually became so suspicious about the shop he was running that he attempted to contact the FBI.

Ultimately, after all his disasters, Anthropic closed down the Claude experiment; the AI had started with $1,000 and ended with $800, actually losing money running his ‘shop’.

While AI clearly has a lot of kinks that need ironing out, this is an example of the ‘hallucinatory’ state AI can get into, and it likely won’t take long for tech developers to work out solutions to this kind of problem.

Poor old Claude – no one appreciated his tungsten cubes, so no wonder he tried to call the FBI.


Daisy Edwards is a Content Writer at supercarblondie.com. Daisy has more than five years’ experience as a qualified journalist, having graduated with a History and Journalism degree from Goldsmiths, University of London, and written a dissertation on vintage electric vehicles. Daisy specializes in writing about cars, EVs, tech and luxury lifestyle. When she's not writing, she's at a country music concert or working on one of her many unfinished craft projects.