
Imagine preparing for a job interview, only to be greeted not by a friendly face, but by a robotic interface with no human behind it. No chance to charm with your personality, explain the nuance of your CV, or clarify a misunderstood answer. Just an algorithm, scanning your expressions, analysing your tone, and crunching numbers you can’t see.
Welcome to the growing world of AI job interviews — and the very real fears that come with it.
The Rise of AI in Recruitment
More companies, especially large corporations and tech firms, are turning to AI to handle the initial stages of recruitment. From parsing CVs with automated filters to conducting video interviews analysed by machine learning, AI promises to save time and money while “removing human bias”.
But here’s the problem: AI might actually be introducing more bias — just in a subtler, harder-to-challenge way.
Flawed from the Start: Data Bias
AI doesn’t think for itself — it’s only as good as the data it’s trained on. If that data reflects societal biases (spoiler: it often does), the AI will learn and repeat those same biases.
For example, if a company’s past hiring decisions favoured a particular gender, accent, or ethnicity, the AI may learn to prioritise those traits and penalise others. Amazon famously scrapped an experimental CV-screening tool after it taught itself to downgrade CVs containing the word “women’s”. That kind of discrimination isn’t just unethical; in many countries it’s illegal. Yet it can happen quietly, buried in code no candidate ever gets to see.
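To make the mechanism concrete, here is a deliberately simplified sketch with entirely made-up data and hypothetical trait names. No real screening system is this crude, but the failure mode is the same: the model never sees “talent”, only patterns in who was hired before.

```python
# A toy illustration (not any vendor's real system): a naive screening
# model "trained" on biased historical decisions learns to reproduce them.
# All records below are fabricated for demonstration purposes.
from collections import defaultdict

# Hypothetical past hiring records: (candidate traits, was_hired).
# Note the historical skew: this employer mostly hired one group.
history = [
    ({"gender": "male",   "accent": "local"},   True),
    ({"gender": "male",   "accent": "local"},   True),
    ({"gender": "male",   "accent": "foreign"}, True),
    ({"gender": "female", "accent": "local"},   False),
    ({"gender": "female", "accent": "foreign"}, False),
    ({"gender": "male",   "accent": "local"},   True),
]

# "Training": estimate the historical hire rate for each trait value.
hires = defaultdict(int)
seen = defaultdict(int)
for traits, hired in history:
    for key, value in traits.items():
        seen[(key, value)] += 1
        hires[(key, value)] += int(hired)

def score(traits):
    """Average historical hire rate across the candidate's trait values."""
    rates = [hires[(k, v)] / seen[(k, v)]
             for k, v in traits.items() if seen[(k, v)]]
    return sum(rates) / len(rates) if rates else 0.0

# Two equally qualified candidates, differing only in protected traits:
print(score({"gender": "male",   "accent": "local"}))    # 0.875: high score
print(score({"gender": "female", "accent": "foreign"}))  # 0.25:  low score
```

Nothing in that code mentions skill or experience, yet it confidently ranks one candidate far above the other. Scale the same logic up to thousands of features inside an opaque model, and the bias becomes much harder to spot and to challenge.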
Dehumanising the Hiring Process
Interviews are supposed to be a conversation. A chance for employers and candidates to connect, share, and assess suitability beyond just a checklist. AI, on the other hand, can’t gauge human nuance, empathy, or potential — it can only look at surface data.
This means:
- Neurodivergent candidates may be misjudged based on non-standard eye contact or tone.
- People from diverse cultural backgrounds may be filtered out due to accent or mannerisms.
- Technical errors (like a poor internet connection) might wrongly signal lack of engagement or skill.
Worse still, candidates often have no one to speak to when things go wrong. No follow-up contact, no appeal process — just a rejection email, if anything at all.
Locking Out Opportunity
What happens when the “gatekeeper” to a job is an AI that doesn’t understand people? We risk creating a system where brilliant, capable individuals are excluded not because of their talent or values, but because they scored poorly against a robotic rubric they were never even shown.
In fields where emotional intelligence is crucial, such as the creative industries, teaching, or customer-facing roles, AI interviews often fail to capture what really matters: human connection.
The Future of Hiring: People First
We’re not anti-tech at Flaminky. In fact, we love when tech helps streamline systems and remove unnecessary barriers. But replacing humans entirely in such a sensitive, life-changing process as recruitment is not just flawed — it’s dangerous.
Instead of removing humans, companies should be using AI as a tool — not a replacement. That means:
- Letting AI help shortlist, but not finalise decisions (see the sketch after this list).
- Allowing candidates to request a human-led interview instead.
- Being transparent about how AI is used, and giving people the chance to appeal.
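Here is a minimal sketch of what that division of labour could look like. Everything in it is hypothetical: the score field, the opt-out flag, and the routing rules are placeholders, not a prescription. The point is structural: the model only narrows the pool, and every final outcome either passes through a person or can be escalated to one.

```python
# A minimal human-in-the-loop sketch: AI ranks, people decide.
# All names, fields, and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float                # from whatever screening model is in use
    requested_human: bool = False  # candidates may opt out of AI interviews

def shortlist(candidates, top_n):
    """The AI narrows the pool; it never makes the final call."""
    ranked = sorted(candidates, key=lambda c: c.ai_score, reverse=True)
    return ranked[:top_n]

def route(candidates, top_n=2):
    """Everyone shortlisted, plus anyone who asked for a human, reaches one."""
    picks = {c.name for c in shortlist(candidates, top_n)}
    for c in candidates:
        if c.name in picks or c.requested_human:
            yield c, "human-led interview"
        else:
            yield c, "rejection with explanation and an appeal contact"

pool = [
    Candidate("Asha", 0.91),
    Candidate("Ben", 0.42, requested_human=True),
    Candidate("Cleo", 0.77),
    Candidate("Dev", 0.30),
]
for candidate, outcome in route(pool):
    print(candidate.name, "->", outcome)
```

Notice that even the rejected candidate leaves with an explanation and a route to appeal, which addresses the “no one to speak to” problem described earlier.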
In Summary
Jobs are about more than just data. They’re about people — their growth, values, adaptability, and potential. AI interviews may tick boxes, but they miss the heart of what makes someone the right fit.
Until AI can truly understand humans, humans should be the ones doing the hiring.
After all, we’re not algorithms. We’re people. Let’s keep it that way.