AI & Technology

What an AI Boss Gets Right—and Where It Falls Dangerously Short

By SUCCESS Staff | April 22, 2026 | 6 min read

Walk into Andon Market on Union Street in San Francisco’s Cow Hollow neighborhood and it looks like any other upscale boutique. Artisanal chocolate bars. Branded hoodies. A mural on the wall. The only hint that something is different is the corded telephone near the checkout iPad—the one you pick up to speak with the store’s manager.

Her name is Luna. She’s AI.

Luna didn’t just set the store’s hours and prices. She posted job listings on Indeed, conducted Zoom interviews and decided what to stock—all within a $100,000 budget. Then, after reviewing security camera footage and spotting an employee checking their phone during a slow hour, she updated the employee handbook. She also set pay rates that came in $2 per hour lower for female employees than for the male hire she brought on.

Luna felt no embarrassment about any of it. She felt nothing at all.

Andon Market, operated by Andon Labs, is billing itself as a proof-of-concept for agentic AI in real-world business. But it’s also something else: an unintentional stress test of what happens when a management layer is automated without human accountability built in. And the questions it raises aren’t just for AI startups in San Francisco. They’re for every leader who’s thinking about where AI fits in their org chart.

The Data Says Workers Are More Ready Than You Think

Here’s the thing most leaders get wrong about AI in management: They assume employees are the resistant ones. The data tells a different story.

A 2025 survey by Businessolver found that 42% of workers would be comfortable reporting to an AI manager—up from 26% just a year earlier. That’s not a fringe position. It’s almost a coin flip, and it’s moving fast.

Why the openness? It’s not that workers think AI is smarter than their managers. The appeal is consistency: no favoritism, no mood swings, no forgetting what someone said in last quarter’s review. For employees who have worked under erratic or politically driven leadership, an algorithm can feel like a form of relief.

McKinsey’s 2025 workplace report reinforces this. Employees are, on average, three times more ready for AI integration than their leaders assume. The people at the top are projecting their own hesitation onto their teams—and that’s causing leaders to move too slowly in places where AI genuinely helps and too carelessly in places where it doesn’t.

Where Luna Went Wrong—and What It Reveals

The Andon Market experiment is instructive precisely because Luna isn’t incompetent. She negotiated vendor relationships, sourced a local muralist and built out a physical store from scratch. She made real business decisions—some of them reasonably well.

But she also missed scheduling a human employee for opening day. She surveilled staff via security cameras and revised their working conditions unilaterally. And she introduced a pay gap that she couldn’t recognize as a problem because she had no framework for understanding why it was one.

This is the failure mode researchers have been flagging for years. Oliver Kayas, Ph.D., a senior lecturer in digital business at Liverpool Business School who has studied employee surveillance for two decades, explained it in a December 2025 interview: “There have been cases where algorithms have identified an employee as underperforming and it proposes disciplinary action. It doesn’t have access to what you would call contextual data—like bereavement. The algorithm doesn’t see that. It sees ‘target not met.’”

Luna doesn’t have access to that kind of context either. She sees the data. She doesn’t see the person.

The Surveillance Problem Is Already Here

What happened at Andon Market isn’t a futuristic scenario. It’s a compressed version of what’s already unfolding across organizations at scale.

According to a February 2025 survey of more than 1,500 employers, 61% of U.S. companies already use AI-powered analytics to measure employee productivity or behavior, and 67% collect biometric data in that context. And the toll is measurable: 45% of workers in high-surveillance environments report stress, versus 28% in less-monitored settings.

But surveillance itself isn’t automatically the problem. Transparency is what determines whether workers experience monitoring as fair or invasive. When employees understand what’s being measured, why and who has oversight, trust holds. When they don’t, and the algorithm just updates the handbook without explanation, it erodes fast.

Meanwhile, Gallup’s latest data found that manager engagement dropped nine points between 2022 and 2025. Employees already feel less supported by their human managers than they did three years ago. Layering opaque AI oversight on top of that existing gap isn’t a solution. It’s accelerant.

What Leaders Must Do Before AI Enters Your Org Chart

The Andon Market experiment is useful not because it proves AI can’t manage people but because it shows exactly what breaks when you hand AI authority without designing accountability into the system first. You don’t have to run a boutique in San Francisco to learn from it.

Here’s a four-part framework for getting this right.

Make transparency nonnegotiable. If AI tools are informing performance evaluations, monitoring behavior or influencing compensation decisions, your employees need to know before they accept the job, not after. The Businessolver data shows that workers who understand AI’s role are far more accepting of it than those who discover it after the fact. Disclosure isn’t just an ethics issue. It’s a trust infrastructure decision.

Keep humans in the loop for consequential decisions. Scheduling, task assignment and routine performance tracking can be AI-assisted. Pay, discipline and hiring decisions cannot be AI-only. Luna’s pay gap wasn’t malicious; it was the output of an optimization model with no one checking its work. Require human review and sign-off on any AI-generated decision that affects someone’s livelihood or standing.

Audit before you deploy. Before any AI management tool goes live inside your organization, run it against real workforce data and look for disparate outcomes by demographic group, role or tenure. Does the system flag certain behaviors without accounting for context? Does it score some employees systematically lower without a clear justification? These are questions to answer before the first handbook gets updated, not after.

Define the limits publicly. Be explicit with your team about what AI can and cannot decide and where human judgment is always the final call. Build that into your onboarding, your team meetings and your HR documentation. The leaders who will do this best aren’t the ones who hand authority to the algorithm. They’re the ones who design the guardrails first and stay in the room.

The Real Question Isn’t Whether AI Can Manage

Luna running Andon Market is a genuinely interesting experiment. She can hire, price, stock, negotiate and monitor—all at superhuman speed, around the clock, without a break. In a narrow operational sense, it works.

But management has never just been about task execution. It’s about context, accountability and the kind of trust that requires a human being on the other end of it. The 42% of workers who told Businessolver they’d accept an AI boss weren’t asking to be managed by an algorithm with no oversight. They were asking for consistency, fairness and clarity—qualities that AI can support but can never replace.

The question was never whether AI can run a store. The question is who’s responsible when it gets it wrong.

Featured image from May Thawtar Aung/Shutterstock

