Algorithmic Inequalities: How AI Hiring Tools Replicate Old Workplace Biases

AI-powered hiring tools are deeply implicated in exacerbating existing inequalities in the labour market.

Organisations globally are increasingly incorporating artificial intelligence (AI) hiring tools to filter, evaluate and shortlist candidates for jobs. Initially, these systems were touted as a way to reduce administrative burden, accelerate hiring and eliminate human prejudice from recruitment. However, a growing body of research suggests that instead of neutralising bias, AI-powered hiring tools are exacerbating existing inequalities in the labour market. This trend is particularly detrimental for women whose professional trajectories include career breaks – pauses in formal paid employment, often taken for caregiving, elder care, childbirth or other familial responsibilities – as well as for those with unconventional, non-linear resumes.

Unprecedented rise of AI in hiring 

Today, AI has become deeply embedded in global recruitment workflows. In its nascent stages in the early 2000s, AI-driven recruitment meant Applicant Tracking Systems (ATS): keyword-based resume screening that filtered applicants against specific qualifications and job descriptions. Advances in machine learning (ML) and natural language processing (NLP) then transformed resume parsing, as AI platforms developed capabilities to understand context, skills and experience levels in resumes. AI-powered job-matching algorithms began matching candidates with positions based on historical hiring trends. By the late 2010s, AI-driven chatbots had automated candidate engagement and pre-screening. Predictive analytics helped companies anticipate workforce demands using historical data, industry trends and attrition patterns, combined with AI sourcing to surface the best candidates, enabling more strategic, data-driven recruitment. This marked a crucial shift for AI from passive tool to active determinant in the recruitment process.

A recent survey by LinkedIn revealed that 70% of Indian recruiters are using AI to tap into “hidden talent”, assess candidates’ skills and accelerate hiring and onboarding. Around 80% of respondents agreed that AI made it easier to assess a candidate’s skills, and 76% thought it helped streamline otherwise tedious hiring processes. Companies are adopting AI infrastructure to automate CV screening, match resumes to job descriptions and even conduct initial AI-led video interviews. While the fundamental intent may be positive – attempting to eliminate the bias and subjectivity of human judgement – reports have revealed a concerning trend: the core data and criteria used by AI systems mirror historical patterns of exclusion and discrimination.

How AI bias affects women more 

AI tools do not operate in a vacuum; MIT Sloan Professor Emilio J. Castilla calls this the “paradox of algorithmic meritocracy”. The majority of AI hiring models use machine learning algorithms trained on historical human resources data such as past resumes, performance outcomes and hiring decisions. If those historical decisions were shaped by flawed human assumptions and contain discriminatory patterns (for instance, hiring fewer women and a lack of diversity in senior positions), the AI learns to associate “successful” candidates with features linked to those biased outcomes, exposing the ethics and credibility of supposedly ‘neutral and unbiased’ software.
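
To make the mechanism concrete, here is a minimal sketch using synthetic data and a hypothetical “career break” feature (not any vendor’s actual system). A simple classifier trained on biased historical decisions absorbs the bias as a strongly negative weight:

```python
# Illustrative only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1_000

skill = rng.normal(size=n)                 # genuinely job-relevant signal
career_break = rng.integers(0, 2, size=n)  # 1 = candidate took a career break

# Biased "historical" decisions: recruiters rewarded skill but also
# systematically rejected candidates with career breaks.
hired = (skill - 1.5 * career_break + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, career_break]), hired)
print(model.coef_)
# The career_break coefficient comes out strongly negative: the model has
# faithfully "learned" the discrimination embedded in its training data.
```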

This phenomenon became most visible when the AI recruitment tool developed by Amazon, trained on ten years of hiring data dominated by male candidates, was found to be penalising resumes containing women-associated terms such as “women’s chess club captain” or “women’s college”. Amazon eventually scrapped the tool. In another instance, several companies, including Goldman Sachs and Unilever, used HireVue’s speech recognition algorithms to assess candidates’ spoken English proficiency; research uncovered that these algorithms disadvantaged non-white and deaf candidates. Cultural bias appears as well: some AI tools have downgraded resumes from candidates who studied at historically Black colleges and women’s colleges, because those institutions were underrepresented in the predominantly white-collar hiring pipelines the tools were trained on.

Career breaks misinterpreted as negative features

Women are statistically more likely than men to take career breaks due to caregiving responsibilities. LinkedIn’s report finds that women are 63.5% more likely than men to list career breaks on their profiles. Interestingly, women from countries with more inclusive policies, such as Sweden, Germany and France, were more transparent about the break (over 50%), while women from the Global South appeared apprehensive about listing the gap (around 20%). For men, the most common break types were personal goal pursuit and professional development, typically lasting 6 to 12 months, whereas pauses of six months to several years are common amongst women, especially in regions with limited social support around parental leave and childcare. The report also found that career breaks hindered women’s return to the workforce. AI systems treat consistent employment as evidence of commitment, reliability and competence, so gaps, regardless of context, can be read as negatives.

While research and global data specifically demonstrating how AI penalises career breaks are limited, industry research strongly suggests that women with career gaps are less likely to be shortlisted for roles they are qualified for on par with their male counterparts (without the employment gap). Labour scholars argue that AI resume-ranking models can favour uninterrupted career progressions and penalise “non-linear” resumes – a structural disadvantage for many women. On the surface, this may appear to be objective evaluation, but it replicates the broader prejudices of historical recruitment traditions at an even larger scale. And because these systems are promoted as “data-driven” and shrouded in an air of neutrality, their decisions are harder to challenge.
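
To illustrate how such a penalty can arise without anyone encoding gender at all, consider a hypothetical ranking feature that rewards employment continuity (an invented example, not a documented production system):

```python
from datetime import date

# Hypothetical "employment continuity" feature for a resume-ranking model:
# the fraction of time employed between the first start and last end date.
def continuity_score(stints: list[tuple[date, date]]) -> float:
    employed = sum((end - start).days for start, end in stints)
    span = (stints[-1][1] - stints[0][0]).days
    return employed / span

# Two candidates with identical skills and eight years of experience each;
# only the two-year career break differs.
linear = [(date(2015, 1, 1), date(2023, 1, 1))]
with_break = [(date(2013, 1, 1), date(2018, 1, 1)),
              (date(2020, 1, 1), date(2023, 1, 1))]

print(continuity_score(linear))      # 1.0
print(continuity_score(with_break))  # ~0.8 -- the break alone drags the rank down
```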

Favouring linear profiles and algorithmic bias

AI recruiting tools rely on skills-based matching – identifying keywords that align with the job description. In principle, this can level the playing field for non-traditional applicants by emphasising skills rather than pedigree. LinkedIn, for instance, claims to shift hiring from “pedigree and titles” to demonstrable skills, but such claims are not backed by the careful calibration required to ensure that AI does not undervalue experiences that don’t fit established, linear templates.
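
A stylised sketch of such keyword matching (hypothetical, not LinkedIn’s actual algorithm) shows both its appeal and its brittleness: candidates who describe the same skills in different vocabulary simply score lower.

```python
# Hypothetical keyword/skills matcher: score a CV by its word overlap
# with the skills listed in a job advert.
def skill_overlap(cv_text: str, required_skills: set[str]) -> float:
    words = set(cv_text.lower().split())
    return len(required_skills & words) / len(required_skills)

job_skills = {"python", "sql", "forecasting"}

print(skill_overlap("Built Python and SQL forecasting pipelines", job_skills))
# 1.0 -- a perfect surface-keyword match

print(skill_overlap("Automated revenue projections with pandas and databases",
                    job_skills))
# 0.0 -- the same underlying skills, described differently, score zero;
# this is how rigid templates undervalue non-linear experience.
```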

Industry research highlights the risks associated with AI hiring. Zhisheng Chen’s study on algorithmic discrimination in hiring finds that while AI supports efficiency, it often reproduces biased outcomes based on race, gender and other characteristics found in training data. Another study analysing large language models (LLMs) used in hiring evaluations revealed cultural and linguistic biases in the ranking of interview transcripts: Indian applicants received markedly lower scores than their British counterparts, even when anonymised. This implies that AI systems inadvertently favour Western linguistic and communication norms (such as accent and tone), systematically disadvantaging non-native candidates. These biases can lead to less diverse hiring outcomes and, in some cases, filter out qualified candidates at the initial hiring stages.

A 2026 UK report by the City of London Corporation discovered that mid-career women, especially those with five to ten years of clerical experience, were being overlooked for positions in tech and financial services because of rigid automated screening processes that did not account for career breaks. The report also revealed that these women were at higher risk of losing their jobs to automation than their male counterparts. 

Women are already disproportionately affected by unfair, stereotypical hiring practices, alongside facing invasive questions from potential employers about their plans to marry and “start a family”. After finishing their paid work, women spend additional hours on unpaid domestic and care work that goes unaccounted for. The added layer of AI mirroring these existing barriers only complicates the situation further and necessitates significant change.

Researchers from the University of South Australia suggest that “AI alone cannot fix the biases”; incorporating equality-orientated algorithms without structural context and oversight would do little for diversity. AI developers and employers must address this by curating intersectional training datasets that include diverse geographical, demographic and professional trajectories. Career breaks and offbeat, non-conventional roles and experiences need to be acknowledged. AI should augment human judgement: skilled HR professionals must interpret and verify AI recommendations against contextual information that cannot be encoded by algorithms. Organisations must be held accountable for transparency about algorithmic criteria, and candidates should have insight into how hiring decisions are made. Just as countries are increasingly developing policies against the misuse of AI and enforcing equal-opportunity recruitment laws, there must also be clear standards and regulations around fairness and anti-discrimination in AI hiring.
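
One concrete form such oversight can take is a routine disparate-impact audit of shortlisting outcomes. The sketch below applies the “four-fifths rule” used in US equal-employment guidance as a rough screen; the numbers here are invented purely for illustration:

```python
# Rough disparate-impact screen based on the "four-fifths rule":
# a group's selection rate should be at least 80% of the most
# favoured group's rate. Numbers below are invented for illustration.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def passes_four_fifths(group_rate: float, reference_rate: float) -> bool:
    return group_rate / reference_rate >= 0.8

women_rate = selection_rate(selected=30, applicants=200)  # 0.15
men_rate = selection_rate(selected=50, applicants=200)    # 0.25

print(passes_four_fifths(women_rate, men_rate))
# False -- the AI screener's output should be flagged for human review.
```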

References: 

https://www.theguardian.com/business/2026/feb/04/women-tech-finance-higher-risk-ai-job-losses-report
http://ojs.aaai.org/index.php/AIES/article/view/36703/38841
https://www.nature.com/articles/s41599-023-02079-x
https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/PDF/gender-gaps-in-career-breaks.pdf
https://www.businessinsider.com/hirevue-uses-ai-for-job-interview-applicants-goldman-sachs-unilever-2017-8
https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
https://mitsloan.mit.edu/ideas-made-to-matter/ai-reinventing-hiring-same-old-biases-heres-how-to-avoid-trap
https://cardozolawreview.com/automating-discrimination-ai-hiring-practices-and-gender-inequality/
https://hr.economictimes.indiatimes.com/news/workplace-4-0/recruitment/over-70-indian-recruiters-turning-to-ai-to-find-hidden-talent-report/127885466
https://explore.hireez.com/blog/history-of-ai-in-recruitment/


About the author(s)

Simran Dhingra is a recent graduate from Geneva Graduate Institute. Her research interests lie at the intersections of gender, peace, and migration. Her work examines how digital infrastructures reproduce power hierarchies, shape vulnerabilities, and influence policy responses at multilateral and institutional levels.
