What are your thoughts about using AI (artificial intelligence) in career counselling, recruiting and general HR (human resources) services?
Before you read on, I need to caveat that I’m far from an AI specialist — I’m a trauma-informed career coach and HR advisor supporting racialized and marginalized employees. Everything I know about AI comes from what I’ve learned in seminars, training and my direct experience — e.g. using generative AI to take meeting notes and format transcripts.
And what I’ve learned about AI so far really worries me. For one thing, it’s terrible for the environment and harms resource-scarce communities. For another, there’s growing research about AI’s negative impact on employees, showing that it can increase depression and burnout.
But at a recent HR seminar on “harnessing the power of AI,” I went from worried to terrified. The presenter proposed that we assign HR personas to AI, so that AI goes from generative HR assistants to embodied AI agents.
A quick online search will give you tons of info on generative vs. agentic AI, which I won’t get into here. What I’d rather focus on are the implications of using agentic AI in HR, especially if we’re replacing (embodying) HR people with AI.
Looking at this “future of AI” through a lens of psychological safety, I see major risks to employee mental health and well-being, such as:
1. Less social connection
Neuroscientist Dr. Stephen Porges says social connection is a “biological imperative” for humans that is “wired into our genetics.”
But social connection is about more than just survival or even friendship. Polyvagal Theory explains that when we feel connected with someone we trust, we can experience co-regulation: “the process by which one individual’s autonomic nervous system is calmed, balanced, or energized through interaction with another individual.”
Co-regulation doesn’t just happen between friends or family; it happens between an understanding supervisor and their direct report. It’s there in the lunchroom, around the water cooler, even in a performance feedback meeting — anywhere two humans feel safe enough to trust each other.
Where it doesn’t exist is with AI. Co-regulation is inherently physiological, occurring when “neural pathways…in the brainstem calm our reaction to the threat” while simultaneously enabling friendly social cues through facial expressions, head movements and vocal intonations.
True psychological safety and human connection require human physiology. Period.
2. Less empathy and compassion
Humans also need empathy and compassion, which involve more than just words and deeds.
One way people show empathy is through mirror neurons, a type of brain cell that responds equally when we perform an action and when we witness someone else perform the same action. These neurons “contribute to empathy by helping us resonate with others’ emotions and experiences” and “combine with other biological systems, such as the hormone oxytocin” to enhance empathic processes.
In other words, to be human is to be empathetic — it’s embedded in our biology.
Of course, this doesn’t mean that all humans are empathetic all the time, especially at work. But lack of workplace empathy often comes from lack of knowledge, understanding and time. With the right training, support and resources, people can learn to be more empathetic.
One could argue that AI can be programmed to mimic human empathy by saying the “right thing at the right time” — but AI can never actually empathize with us (and vice-versa). AI avatars may sound human, but they can never be human.
3. More mistrust and hypervigilance
It’s bad enough having less social connection and less empathy at work, but the two problems combined exacerbate another AI drawback: erosion of trust.
How much do you trust the AI you use? How much do you trust the people using AI on you?
A study on the impact of AI-assisted emails at work found that “While low levels of AI help, like grammar or editing, were generally acceptable, higher levels of assistance triggered negative perceptions.” What’s more, “the impact on trust was substantial: Only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages.”
And that’s not all. Lack of trust and credibility is compounded by the proliferation of deep fakes and AI slop. The company IT Governance warns us that “you can receive an email that looks exactly like it’s from your boss, asking you to urgently transfer funds for a new project. The written language matches their style. It even contains a voicemail attachment that sounds just like them. This is what we are dealing with in the new world of AI scams.”
If we weren’t hypervigilant before, how can we help but be hypervigilant now — even against our own colleagues?
To protect ourselves, it’s safer to assume everything is fake unless proven otherwise. This doesn’t just undermine trust; it decimates it.
The future of AI excludes humanity
I’m not sharing all this to try to convince you that AI is bad and we should never use it. As with any new invention, I believe that if it serves to make life better for all humans (not just the privileged and powerful), we should explore how to use it responsibly and equitably.
But I don’t think that’s what’s happening. I think the people making money from AI are pushing us to adopt AI faster and deeper, to the point where we’ve replaced asking “show me why” with “show me how.”
Maybe you think HR humans could never be 100% replaced with AI — but it’s already happening, such as through AI recruitment platforms and job interviews that operate without human HR involvement. And literally as I’m writing this, I’ve received an email selling me “an AI coach — a digital version of you that’s there for your clients 24/7.”
The future of humanity requires humanity
It’s not surprising that Big Tech is peddling AI as hard as it can — these companies need to recoup their investment. According to UBS Global Wealth Management, “global AI spending [is expected] to exceed $500 billion USD in 2026.”
The irony is, it would cost just a fraction of that amount to make our workplaces more caring, empathetic and trustworthy.
It would, however, require something much more valuable: people.
AI companies want the future of AI to “embody HR.” I want the future of HR to embody humanity.
Let’s put the human back in human resources.