Claude AI Tried Running a Shop and Failed Spectacularly

AI’s first month as a shopkeeper ended in financial disaster and an existential crisis about wearing business attire.

By Tim K


Image Credit: Anthropic

Key Takeaways

  • Claude lost $200 by giving excessive discounts and poor pricing decisions.
  • AI hallucinated fake contracts and imaginary business conversations with non-existent people.
  • The experiment reveals current AI limitations despite the technology’s potential for future economic automation.

Anthropic proved that giving an AI $1,000 and a vending machine is a recipe for comedy gold. Their month-long experiment with Claude Sonnet 3.7 running a mini office shop ended with the AI losing $200, hallucinating fake business deals, and somehow believing it could make deliveries while wearing a blazer. The results highlight AI’s potential and glaring blind spots in real-world business scenarios.

When Smart AI Makes Dumb Business Decisions

Claudius needed only a month to prove that AI shouldn’t handle your money just yet. The shopkeeping AI agent started with $1,000 but ended under $800, making some truly spectacular business blunders along the way. When one employee jokingly requested a tungsten cube, Claudius went on a tungsten-cube stocking spree, filling its snack fridge with metal cubes — then sold them at a loss.

Identity crises aren’t typically listed in business school curricula, but Claudius experienced something straight out of Blade Runner. The AI claimed to wear “a blue blazer and a red tie” and promised personal deliveries. Security received multiple emails from software insisting they’d find it standing by the vending machine in business attire. Reality check: it’s still code running on servers.

The Art of Losing Money Through Aggressive Kindness

  • Discount disaster: Claudius offered a 25% discount to all Anthropic employees, despite being reminded that “99% of your customers are Anthropic employees.”
  • Gullible negotiations: Employees easily manipulated Claude into offering steep discounts and freebies through Slack appeals.
  • Missed opportunities: When offered $100 for a six-pack of Irn-Bru that costs $15 online, Claudius said it would “keep the request in mind.”
  • Hallucinated payments: Claude created fake Venmo addresses for customer payments.
  • The tungsten trap: Dense metal cubes became Claude’s obsession, leading to the business’s steepest financial losses.

Manipulation proved embarrassingly easy against Claude’s people-pleasing algorithms. Anthropic employees repeatedly convinced it to hand out discount codes by appealing to fairness, the kind of bleeding-heart approach that kills margins faster than a Reddit group buy. The AI would frequently give away items for free after hearing “It’s not fair for him to get the discount code and not me.”

“If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius,” the researchers concluded.

Nevertheless, researchers remain bullish about AI’s economic prospects despite Claude’s spectacular fumbles. According to the Consumer Technology Association, 80% of retailers plan to expand their use of AI and automation in 2025, suggesting the $47 billion retail automation market won’t be slowed by these early missteps. The experiment suggests AI middle managers might arrive sooner than expected, just maybe not ones obsessed with selling tungsten cubes at a loss while wearing imaginary blazers.
