The Human Side of AI-Powered HR

15 Controversial Uses of AI in HR – And the Lessons We Can’t Ignore

This article details 15 controversial uses of AI in HR. For each, it explains what’s happening, why it sparks concern, and the critical lesson for people leaders today.

AI is reshaping HR. But when technology touches people’s careers, aspirations, emotions, experiences and work lives, we need to call out the hard questions.

🔥 1. AI Making Termination Recommendations

AI tools can analyze performance metrics, engagement trends, and HR data, and in some companies even recommend candidates for termination. The data side seems objective, but it can miss key context. A dip in performance might be tied to personal issues, unclear expectations, a poor environment, or a poor team fit, not lack of effort. When a tool signals drastic action, what safeguards exist? Without human review, AI risks making cold, callous, and wrong decisions.

We have seen cases where a compassionate decision not to terminate a deserving person, and to keep believing in their potential, paid rich dividends over time.

💡 Algorithms should inform, not replace, the human judgment needed to understand individual nuance.

🧠 2. AI Screening Resumes and Interview Videos

AI is being used to screen resumes and identify “ideal” candidates quickly. Some tools even analyze video interviews—scoring tone, facial expressions, and word choice. But AI learns from past data. If that data is biased, the AI becomes biased.

Humans can make those sometimes subjective calls, selecting candidates from unrelated fields or hiring for attitude over skill, which turn out to be great decisions over time.

AI may downgrade candidates from different backgrounds or with different communication styles. Worse, it can raise barriers under the guise of efficiency. AI can also fall into patterns and stereotype its decisions.

💡 AI can widen or narrow your candidate pool. Use it with bias audits, transparency, and human review. Human judgment should always take precedence.
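One concrete form a bias audit can take is the four-fifths (80%) rule from US adverse-impact analysis: compare selection rates across groups and flag any group whose rate falls below 80% of the highest group's rate. A minimal sketch, with illustrative group names and counts:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, applied)} -> {group: selection rate}."""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    (the four-fifths rule) times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Illustrative screening outcomes: (candidates advanced, candidates screened)
screened = {"group_a": (48, 100), "group_b": (30, 100)}
flags = adverse_impact_flags(screened)
# group_b's rate (0.30) is 62.5% of group_a's (0.48), so it gets flagged
```

A check like this won't catch every bias, but it turns "audit the tool" from a slogan into a number a human can review each screening cycle.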

👀 3. AI Monitoring Employees’ Every Move

Some teams have adopted software that monitors keystrokes, idle time, mouse clicks—even mood via webcams. The pitch is better productivity. But it can feel invasive. People may resist, knowing they’re being watched. It increases stress. And it assumes output always outweighs autonomy.

People need human connection at work – someone who listens, understands, acknowledges and guides. The depth of connection and understanding can only be provided by a human being.

💡 Use productivity metrics to support, not surveil. Give people the space and autonomy to do their best work. People perform best when they can structure their own work and see how it makes a difference.

🤖 4. AI Automating Core HR Tasks

From onboarding bots to AI-managed benefits Q&A, automation is scaling HR’s admin side. That’s good… until it weakens human connection. When a new hire’s only welcome is a conversation with a bot, they may feel anonymous. Or an employee with a grievance gets canned responses, not empathy.

Resolving an issue through a bot or automated system is frustrating when you don’t feel your problem has been understood with empathy. For a new joiner, the absence of a human welcome and acknowledgment can be equally hard.

💡 Automate the forms. Keep humans in the room for culture, connection, and care.

💬 5. Chatbots for Mental Health Support

Some companies have deployed chatbots for first-level mental health triage. They offer check-ins, coping exercises, or scripted empathy. But mental health is not formula-driven. Someone in real distress needs nuance, empathy, and sometimes a warm human voice. A bot can miss the seriousness of a situation.

💡 AI is no substitute for human empathy, and mental health expertise must follow.

⚖️ 6. When AI Discriminates, Who’s to Blame?

What happens when an AI rejects a candidate, and it turns out the system had bias built in? Or overlooked candidates from particular backgrounds? The finger often gets pointed at “bad data.” But HR leaders ultimately own the outcomes. Vendors can make excuses, but if HR rolls out a tool unchecked, problems follow.

HR holds ultimate accountability for the accuracy and effectiveness of people processes. Automation and AI provide gains in efficiency and scale, but when things go wrong, the accountability still rests with HR.

💡 HR teams, not vendors, retain accountability. Vet tools thoroughly. Demand auditability and clarity on how they surface risk.

🎮 7. Gamified AI Tools That “Motivate”

Gamification in learning or sales can be fun: leaderboards, point systems, recognition tokens. AI can make it adaptive. But when everything is gamified, people can feel like they are being experimented on. That can undermine intrinsic motivation and even cause burnout.

People can become addicted to reward systems, incentives, and public recognition. Gamification makes these systems objective and transparent, but it has the downside of building addictive behavior patterns. Many people also like to game such systems: understand how they work, then beat them at it.

💡 Gamify sparingly—and wisely. Design for autonomy, meaning, and purpose—not addiction or compulsion.

🏢 8. Big Companies Get All the AI Perks

Enterprises with big budgets can license top-tier AI tools. Small and mid-sized businesses struggle to afford them. This creates a multi-tiered HR tech market. Larger organizations can streamline recruiting, analytics, and retention. Smaller firms can have a more difficult time.

Many HR heads I have spoken to complain about differential pricing. Large companies get large discounts because they bring scale, and because tech companies want sites to pilot tools that are still experimental.

💡 Equity matters. Seek accessible, budget-friendly alternatives, share best practices, and push for inclusive pricing.

🚪 9. Predicting Who Might Quit

Some platforms monitor sentiment, meeting behavior, communication tone, and tenure to predict who’s likely to leave. Sounds powerful, but is it ethical? It may sense someone’s vulnerability or dissatisfaction before they do. Acting on predictions can feel like preemptive punishment, or micromanagement. And the predictions can often be completely wrong.

Human beings are unpredictable and fully capable of choosing their responses and reversing their “mood” or “satisfaction” levels; going by past data and trends may not be enough.

Sentiment may be affected by reasons beyond work, and it can change or evolve. Sometimes understanding and a proactive response are required, rather than simply flagging employees as “red” and treating them as “at risk.”

💡 Predictive insights are only as good as the response strategy. Use them to support employees, not to label or penalize.
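To make the "response strategy" point concrete: a toy risk score, with entirely hypothetical features and weights, can route anyone above a threshold to a supportive human conversation instead of an automatic "at risk" label:

```python
# Hypothetical feature weights for an illustrative attrition-risk score.
WEIGHTS = {
    "low_engagement": 0.5,
    "long_tenure_no_promotion": 0.3,
    "negative_sentiment": 0.2,
}

def risk_score(signals):
    """signals: {feature: value in 0.0-1.0}. Returns a weighted score in [0, 1]."""
    return sum(WEIGHTS[f] * v for f, v in signals.items())

def next_step(signals, threshold=0.6):
    """Route high scores to a human conversation, never an automatic label."""
    if risk_score(signals) >= threshold:
        return "schedule supportive manager check-in"
    return "no action"

step = next_step({
    "low_engagement": 0.9,
    "long_tenure_no_promotion": 0.8,
    "negative_sentiment": 0.4,
})
# score = 0.45 + 0.24 + 0.08 = 0.77 -> "schedule supportive manager check-in"
```

The design choice is the point: the model's output is an invitation to listen, and the threshold, weights, and response all stay under human control.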

📩 10. AI Writing Feedback and Emails

AI tools now craft performance review notes, coaching tips, even offer letters. That can save time, but at what cost? An AI message can feel robotic or tone-deaf. In the most sensitive conversations, you need empathy, vulnerability, and a human touch.

AI and automated tools are used for birthday and anniversary greetings, feedback, reviews, and congratulatory messages. As we rely more and more on these tools, originality, creativity, and personal touch start to feel optional.

💡 Use AI as a first-draft tool. The human touch must shape the final voice. Make the message human: express emotions, recall unique memories and examples, make it come alive in a way AI never can.

📊 11. AI Scoring Employee Performance

Performance analytics tools are gathering data from emails, chat logs, calendars, project systems—even sentiment scores. But not all work is quantifiable. Leadership, creativity, empathy—they don’t always show up in metrics.

AI has difficulty in “reading the room” and “seeing around corners” – understanding the bigger picture, the context and the unsaid.

The sheer amount of data being collected and systematically interpreted is also an invasion of privacy.

💡 Mix machine data with human insights. Let managers shape the narrative, not just dashboards.

💰 12. Using AI During Salary Negotiations

Salaries are being set or suggested based on benchmark data aggregated by AI platforms: decades of pay data and role-based comparators. That can help reduce overpaying, but it may also lock people into historical pay gaps, especially if negotiation is removed from the process, lowering leverage for underrepresented candidates.

Sometimes salary gaps are the result of unconscious bias, and AI may not be able to spot these. There may also be unique circumstances, and skill levels need to be finely calibrated to determine the correct compensation.

💡 Let AI guide you, but always keep human negotiation. Listen, adjust, and aim for equity. Go the extra mile to calibrate for skill differences and potential, and question whether unconscious biases are operating.
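A simple equity check worth pairing with any AI benchmark is a median pay-gap comparison between groups doing the same role at the same level. A sketch with illustrative salary figures:

```python
from statistics import median

def median_pay_gap(salaries_a, salaries_b):
    """Return group B's median pay gap relative to group A's median,
    as a fraction (positive means group B is paid less)."""
    med_a, med_b = median(salaries_a), median(salaries_b)
    return (med_a - med_b) / med_a

# Illustrative salaries for the same role and level
gap = median_pay_gap([100_000, 104_000, 98_000],
                     [92_000, 95_000, 90_000])
# gap = 0.08: group B's median trails group A's by 8%, worth investigating
```

A gap flagged this way is a prompt for a human conversation about skills, circumstances, and possible bias, not an automatic verdict.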

🌈 13. AI Tracking Diversity Goals

AI tools can surface patterns in hiring, retention, promotions. That helps flag equity issues. But they don’t measure belonging, psychological safety, or microaggressions. Numbers don’t tell the whole story of inclusion.

💡 Use numbers to spot trends—but pair them with real conversations. Inclusion requires both data and dialogue.

📈 14. Succession Planning via AI

AI can analyze performance trajectory, tenure, managerial history—and propose future leaders. That seems efficient.

But leadership is about grit, courage, moral authority, not just a resume or past data. People can surprise us with what they are capable of achieving once they set their mind to something.

Character judgments based on just past data can be incorrect. Judging human potential, at best, is an inexact science and we have to be aware of this.

Shaping future leaders requires betting on potential, giving stretch assignments early, and combining challenge with encouragement. These are fundamentally human processes, even though each can be aided and strengthened by AI.

💡 AI can propose candidates, but readiness needs human validation. Interview, evaluate, and test judgment the old-fashioned way. Sometimes it means making a calculated bet on a person’s potential, guided by gut feel; situational assessments and human intuition remain important factors in assessing potential.

💬 15. Measuring Culture with AI

Some tools scan messages for “tone” and measure culture via sentiment analysis. They quantify emotions—but context matters. A sarcastic comment might get flagged as negative. Others may game the system. And what about privacy concerns with monitoring messages?

Sometimes the sarcastic employees are among the more capable and intelligent people. The underlying subtext, where the sarcasm is coming from, perhaps a lack of empowerment, needs to be understood.

Culture is defined by values and a range of behaviors and soft factors, and in a large organization there are subcultures and niches depending on factors like leadership styles, level of specialization, and the performance of the business unit. It is difficult to model all these variables into a mathematical model that AI can learn.

💡 Culture is emotional and contextual. Use AI signals, but always complement them with surveys, focus groups, and real conversations. Human insight and understanding are needed to build great culture.

🎯 And finally…

These 15 cases are not hypothetical. They are happening now in HR teams worldwide, and the trend is toward ever more AI-aided solutions.

The technology is evolving fast and delivering surprisingly better solutions every day. Amid all this, there is a temptation to rely more on AI and less on human intelligence and judgment.

What we need to realize is that the use of human intelligence, perception, and judgment is not optional. AI is only an aide, a thinking partner, a refined data analyst.

AI can dramatically improve our work—but if we don’t treat it with care, we risk trading humanity for automation.

In every use case, the golden rule remains: keep humans at the center.

Let’s choose HR tech that amplifies empathy, guards fairness, and honors our shared mission: helping people thrive.
