Exploring ways to detect and reduce bias in hiring content
Bias is everywhere in HR content — sometimes obvious, sometimes hidden:
Job postings that discourage certain groups from applying.
Resume reviews influenced by unconscious stereotypes.
AI models that amplify patterns instead of challenging them.
These issues don’t just create unfairness — they also hurt organizations by narrowing talent pools.
The Bias Scanner is my experimental project aimed at flagging potential bias in HR content — from job postings to AI-driven outputs.
The goal is not to replace humans, but to give HR professionals a second set of eyes that can spot risky language, patterns, or gaps they might miss.
Input Content
Paste in a job description, resume, or policy text.
AI Analysis
The model highlights language, phrasing, or scoring practices that may carry bias.
Suggestions
It offers rewrite prompts or alternative wording that's more inclusive and neutral.
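As a rough illustration of this scan-and-suggest flow, here is a minimal Python sketch. The term list, the suggested replacements, and the `scan_text` function are all hypothetical placeholders; the actual prototype relies on an AI model rather than a fixed lexicon.

```python
import re

# Hypothetical starter lexicon -- a real scanner would use a model or a
# much larger, validated term list. Keys are flagged phrases; values are
# suggested neutral alternatives.
BIAS_TERMS = {
    "rockstar": "skilled professional",
    "ninja": "expert",
    "young and energetic": "motivated",
    "digital native": "proficient with digital tools",
    "manpower": "workforce",
    "salesman": "salesperson",
}

def scan_text(text: str) -> list[dict]:
    """Flag potentially biased phrases and pair each with an alternative."""
    findings = []
    for term, suggestion in BIAS_TERMS.items():
        # Case-insensitive, word-boundary match so "Rockstar" is caught
        # but substrings inside longer words are not.
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            findings.append({
                "term": match.group(0),
                "position": match.start(),
                "suggestion": suggestion,
            })
    return findings

if __name__ == "__main__":
    posting = "We need a young and energetic rockstar to grow our manpower."
    for f in scan_text(posting):
        print(f"'{f['term']}' at {f['position']} -> consider '{f['suggestion']}'")
```

A lexicon pass like this is cheap and predictable, which makes it a useful baseline to compare the model's flags against, even though it misses the subtler phrasing and scoring patterns the AI analysis is meant to catch.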
“Even small word choices can shape who applies, who advances, and who feels included.”
Bias Scanner could help HR teams:
Write more inclusive job postings.
Audit AI-generated hiring outputs.
Improve compliance with evolving bias regulations.
Right now, Bias Scanner is in a research and prototype phase. I’m exploring:
Which types of bias are easiest to detect (gender, age, ability, etc.).
How to build a scoring system for “bias risk” (one rough sketch follows this list).
Ways to integrate it with other tools (like CRS).
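For the scoring question in particular, one direction looks roughly like the sketch below: count flagged terms, weight them by bias category, and normalize by document length. Every number here (the category weights, the term-to-category mapping, the “flags per 100 words” cap) is a placeholder assumption, not a calibrated choice.

```python
from collections import Counter

# Hypothetical category weights -- how severe each bias category is
# relative to the others is an open research question.
CATEGORY_WEIGHTS = {"gender": 1.0, "age": 1.0, "ability": 1.2}

# Hypothetical mapping from flagged terms to bias categories.
TERM_CATEGORIES = {
    "rockstar": "gender",
    "young and energetic": "age",
    "digital native": "age",
    "able-bodied": "ability",
}

def bias_risk_score(flagged_terms: list[str], word_count: int) -> float:
    """Return a 0-1 risk score: weighted flags per 100 words, capped at 1."""
    counts = Counter(TERM_CATEGORIES.get(t.lower()) for t in flagged_terms)
    counts.pop(None, None)  # ignore terms with no known category
    weighted = sum(CATEGORY_WEIGHTS.get(cat, 1.0) * n for cat, n in counts.items())
    per_100_words = weighted / max(word_count, 1) * 100
    # Placeholder calibration: 5+ weighted flags per 100 words saturates at 1.0.
    return min(per_100_words / 5.0, 1.0)

if __name__ == "__main__":
    flags = ["rockstar", "young and energetic"]
    print(f"Risk score: {bias_risk_score(flags, word_count=120):.2f}")
```

Normalizing by length keeps a long policy document with two flags from scoring higher than a short posting with the same two, though where to set the saturation point is exactly the kind of calibration the prototype phase needs to work out.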