A few days before writing this, I was scrolling through the Cognitive Bias Codex, that visual of more than 180 cognitive biases mapped out in a single circle. It is one of those things that appears beautiful but becomes slightly uncomfortable the longer you look at it.
That image and a piece I read about AI in hiring ended up in the same mental folder. On one side, a reminder that our brains are full of shortcuts. On the other side, tools that learn from our past decisions and start helping us decide who even gets into a process. In the middle, my own idea of what diversity and inclusion at work actually means.
How I used to see diversity and inclusion in hiring
If I am honest, my own definition used to be quite simple. Diversity and inclusion in recruitment were mostly about:
- Who hears about a role
- Who makes it into the process
- Who is treated fairly in conversations
So I thought about things like job ads, sourcing routines, who is on the interview panel, how questions are asked, and who gets a real chance to be seen. It was human, visible, and in a way quite tangible. Reading about AI in hiring did not suddenly make that wrong. It just made it incomplete.
What changed when AI entered that picture
AI is showing up in places that feel very operational: CV screening, ranking candidates, helping decide who is a match. On its own, that sounds neutral, almost boring. Of course we use tools. Of course we automate.
But here is what my brain did in the background. If our starting point as humans already includes 180-plus biases, and we then train systems on top of our past behaviour, what exactly are we scaling?
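To make that worry concrete for myself, I played with a tiny, completely made-up sketch. Nothing here comes from any real hiring tool; the groups, the numbers, and the penalty are all invented for illustration. The point is only this: if past decisions quietly penalised one group, a model trained on those decisions learns the penalty as if it were signal.

```python
# A toy illustration (not any real vendor's system): train a screening
# model on past hiring decisions that were themselves biased, and watch
# the model reproduce that bias. All numbers here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two equally qualified groups; "skill" is the only thing that should matter.
group = rng.integers(0, 2, n)          # 0 or 1, two hypothetical groups
skill = rng.normal(0, 1, n)            # identical skill distribution in both

# Historical human decisions: skill matters, but group 1 was quietly
# penalised. The 0.8 penalty stands in for accumulated human biases.
past_hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train a screener on those past decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_hired)

# The model "learns" the penalty and applies it to new candidates:
# the same skill distribution now gets very different callback rates.
for g in (0, 1):
    rate = model.predict(np.column_stack([skill, np.full(n, g)])).mean()
    print(f"predicted callback rate, group {g}: {rate:.2%}")
```

And the uncomfortable design detail: simply deleting the group column rarely helps, because other features tend to correlate with it and leak the same pattern back in.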
It is also a question of who might be filtered out before anyone reads their name, and how this whole thing feels for the person on the other side of the screen.
That emotional part matters to me. Not just whether this is statistically fair, but what it does to people over time. The candidate who starts to believe they never had a real chance. The HR person who is told to trust the tool but still carries the responsibility if something feels wrong.
Why I care enough to go deeper
During a screening call, a candidate asked me: "Is this a real interview or a data grab?" That question stayed with me.
So I ordered Hilke Schellmann's book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. It is an investigative look into how these systems are already used in the world of work, not just in theory.
For me, that is a commitment to sit with the uncomfortable parts, not just the shiny ones. I do not want the sentence "AI can be biased" to just sit in my head. I want to understand how that bias actually shows up in tools that decide who is worth a callback, who fits a pattern, and who disappears into a filter.
Where I am right now with all of this
I am not writing this as someone who has the answer to how to fix AI in hiring. Because I do not.
Right now I am here: diversity and inclusion used to mean people and rooms for me. Now it also means systems and defaults. I am very aware that every small setting in a system can travel far, emotionally and practically. And I am in the phase of learning deliberately, not just reacting to buzzwords.
If you work with hiring, AI tools, or diversity and inclusion, I am genuinely curious: how has AI changed the way you personally think about fairness in hiring? Not in theory but in your gut, when you look at a process and ask yourself whether it feels right.