Why Leaders and Employees Are Afraid of AI—and What to Do About It

Two little letters have never before inspired such existential dread at every level of the workplace. Yet years after AI entered widespread use for everything from new soup recipes to complex new workflows, it still has its grip on CEOs and entry-level employees alike. The rub? Nobody quite understands it, and it's constantly changing, as are expectations for how to use it (or not use it) in the workplace.
CEOs feel pressure to get ahead and be the first to implement AI to solve some industrywide problem, while middle managers are caught between progress and ethics, often navigating an AI skills gap at the same time. And still, it's an exciting time for many, as AI offers the promise of a lighter workload for repetitive tasks and more time to pursue creative work.
Survey results from Censuswide illustrate this mixed messaging: 7 in 10 business leaders want their teams to feel comfortable using AI, but at the same time, 52% say it bothers them when they detect AI usage.
“What’s really interesting now is that in organizations, there’s this huge gap between some organizations that are really taking it on, giving GPT to all their staff, letting them basically run free with it, and then others are really risk averse,” says Dublin-based organizational psychologist Erin Shrimpton. “And in a way, both ends of that spectrum are really scary for people. There’s a generalized sense of ‘Am I missing out? Am I not learning what I should be learning?’”
Shrimpton says that across organizations, employees are struggling with anxiety. “What I’m observing is there’s an awful lot of people performatively using this, and that’s causing anxiety, too. When you’re performatively using it, you may not be using it with accuracy—with particularly good execution. People are kind of producing good work, what feels like good work. It ends up that you’ve got people, literally roles in organizations coming in now, where people are clearing up other people’s AI slop.”
Here’s how workers at all levels can think through this anxiety and determine what to do about it.
Understanding the Fears
Not all AI concerns are the same, and they vary by level in a company and workplace environment.
There are a lot of fears from both leaders and employees, according to Shrimpton, including:
Being unsure whether they're using AI properly, or whether they should be using it at all
Being held accountable if something fails to comply with regulation or something unethical happens
Feeling pressure to innovate while fearing the risks and public failures
While companies like to say they are open to trial and error, sometimes that’s not entirely true. “There’s a real mixed messaging of senior leaders and organizations because on one level, they’re told [to] fail fast. You have to demonstrate that you’re experimenting with this stuff, and you have to demonstrate it’s OK to be vulnerable. But you know when it comes down to it, they’re going to be on the line if something unethical happens if they don’t comply with regulation,” Shrimpton shares.
Dean Guida, founder of Slingshot, a software development tool, says these fears were reflected in a survey his company conducted: Nearly half of workers keep their AI use private, yet 60% of employers believe employees are being transparent about their usage.
“It’s not fear of AI. It’s fear of how AI use will be perceived,” he says. “Employees worry that using AI could undermine their credibility or raise questions about their value, especially in environments where expectations are rising faster than guidance. Younger employees are especially sensitive to how AI use is perceived because they’re still proving their credibility. For them, transparency can feel risky instead of empowering.”
A Culture of Silence
Hone “John” Tito, co-founder of Game Host Bros in New Zealand, has seen this kind of self-censorship hurt efficiency at his company. One of his employees shared that they'd been using ChatGPT to write responses to common questions on tickets that crossed their desk and was met with “complete silence in the room.” Tito said, “Nobody wanted to be the guy to say they were also using AI tools because they didn’t know if I’d view that as a reason to cut staff. But after that meeting, I discovered three other guys on the team were doing the exact same thing and keeping it a secret from everybody.”
Not sharing a workflow that improves efficiency for the rest of the team, for the sake of self (and career) preservation, is the real issue here. “They’re protecting their job security, but it costs the company because we’re not scaling what works,” Tito says.
How to Mitigate AI Workplace Anxiety
We don’t have to just sit in anxiety and confusion around AI. Instead, both employees and leaders can take steps to ease fear and improve clarity.
Clarify what employees should and shouldn’t do
AI use is happening across the board. Microsoft research revealed that 71% of UK employees have used unapproved consumer AI tools at work, and 51% continue to do so every week, posing a growing privacy and security risk to UK organizations. A KPMG study reports that 44% of employees use AI tools at work in ways their employers haven't authorized, and 46% have uploaded sensitive company information and intellectual property to public AI platforms, potentially violating policies and creating vulnerabilities.
Employers need to be specific and vocal about what they do and don’t want employees to use AI for. “People are distrusting it… because they’re not really sure if they’re using it properly or what they should be doing,” Shrimpton says.
Clearly communicate why some tools aren’t authorized
The 2025 State of Shadow AI survey found that over 80% of workers, including nearly 90% of security professionals, admit to using unapproved AI tools, which introduces security vulnerabilities and potential breaches when those tools process corporate data outside approved IT channels. Companies are also starting to notice and communicate the objective risks AI poses. A 2025 study shows that 72% of S&P 500 companies now flag AI as a material risk in their public disclosures, up from 12% in 2023, underscoring how quickly AI has become a recognized business risk.
Once the risks are identified and communicated, they can lead to clear policies that protect organizations and individuals.
Pair strategic insight with AI
To reduce anxiety about competing with AI's capabilities, workers can instead lean into the value they bring when working alongside AI.
“When you fast forward to a few years’ time, I hope that we will be in a position where we are all not necessarily replaced by AI but instead having AI do the boring task and having people do the more creative and autonomous work that actually brings us more meaning,” Shrimpton says.
Make group decisions
As with any culture shift at a company, including as many stakeholders as possible can improve buy-in and reduce anxiety around the decision.
“This really is a culture change because it’s absolutely overhauling everything to do with the way we work, and it will hugely change people’s behavior at work. Everybody has to be involved in what’s going on.… Give people the agency to find better ways of working for themselves, to find better work, and that’s when you look at the more optimistic side of what AI can do for us,” Shrimpton says.
Slow down but stay informed
At all levels, the push to become more efficient sometimes competes with safety and security, so keep the two in balance. “The biggest challenge I see is speed versus safety. Companies want faster output, but without clear rules, guidance and ownership, that speed creates stress instead of value. AI works best when expectations are clear, guardrails are defined and humans stay firmly in the loop,” says Colleen Barry, head of marketing at Ketch, a privacy and data governance company, who works closely with legal, product and executive teams on responsible AI adoption.
It’s not just AI anxiety
Shrimpton shares that it isn’t just AI that has workers at all levels anxious. Instead, it’s the state of world events and a heightened sense of anxiety since the pandemic, compounded by the uncertainty around AI. She says that people can experience both excitement and fear around AI simultaneously, and that emotional regulation and connection are key antidotes to that anxiety. Her advice is simple: Find a few close co-workers you can be open with, without worrying about repercussions or performance metrics. “If you can build a little buffer for yourself with your own team and some really good colleagues… that can be really protective, even if the wider culture is a bit toxic,” she says. Whether it’s a working lunch or a quick check-in at someone’s desk, these small moments relieve the stress of broader AI implementation.
In the end, Shrimpton says to find relief in the fact that we are all facing the unknown, not just you. “We are all still human and all muddling through this together.”
--
Featured image by voronaman / Shutterstock.com.
