
How Unbiased Is Artificial Intelligence?

Artificial Intelligence discriminates not only due to implicit biases in its coding, but also because it grows by studying patterns on the internet.

Editor’s Note: This month, that is February 2020, FII’s #MoodOfTheMonth is Feminism and STEM. We seek to challenge the exclusionary biases in the field by inviting articles on the work of women, queer individuals, and people from marginalised communities in STEM, the ways in which the sciences are biased, stereotypes and misconceptions in STEM, and the experiences of people from marginalised identities in the field. If you’d like to share your story, email us at maduli@feminisminindia.com.


From our smartphones to job screenings, Artificial Intelligence (AI) is everywhere. It has entered every aspect of our lives and is increasingly becoming normalised. Although still at a nascent stage, AI has made its way into law, medical sciences, and human resources.

More often than not, owing to its coding by humans, AI imbibes sexist and racist ideas. Human prejudices translate into AI, and as AI learns automatically and grows, the discrimination is amplified. It picks up the stereotypes and prejudices of humans from books, articles, and social media online. Hence, one can call it the automation of bias.

We view Artificial Intelligence as something neutral in nature; this is a dangerous assumption, given that AI isn’t really unbiased.

To answer the question of whether Artificial Intelligence is truly discriminatory, let us take the example of Amazon’s system for screening job applicants. Trained to observe patterns in the resumes submitted to the company over a span of 10 years, the system learnt that male candidates were preferable. Because most of those resumes came from men, and because the historical hiring decisions the AI was trained on favoured men over women, the system learnt the same bias and began downgrading resumes which featured the word ‘women’s’.

AI does not only discriminate due to implicit biases in coding but also because it grows by studying patterns on the internet, or whatever data is fed into it. Even when Amazon’s engineers recognised this problem and stopped the AI from penalising the term ‘women’s’ outright, the system still showed traces of implicit bias, favouring terms like ‘executed’ and ‘captured’ which featured mostly in men’s resumes.
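A toy sketch can show how this happens mechanically. The example below is not Amazon’s system: the four ‘resumes’, the hiring labels, and the whole setup are invented for illustration, using scikit-learn. Because the token ‘women’s’ appears only in rejected resumes, a simple classifier learns a negative weight for it on its own, without anyone coding that rule in.

```python
# A hypothetical, toy illustration of how a resume screener absorbs
# historical bias. The data and labels below are invented; this is
# not Amazon's system or its data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Four toy "resumes" with past hiring decisions (1 = hired).
# The biased history: resumes mentioning "women's" were rejected.
resumes = [
    "captain of chess club, executed new trading strategy",
    "women's chess club captain, strong analytical record",
    "led engineering team, captured key market data",
    "women's coding society lead, built data pipelines",
]
hired = [1, 0, 1, 0]

vectorizer = CountVectorizer()          # tokenises "women's" as "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model has learned a negative weight for "women": a biased
# history has become a biased rule, with no explicitly sexist code.
weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.2f}")
```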

Also read: Facebook’s Community Standards Suppress A Marginalized Voice Again

The appeal of AI systems is the idea that they can make impartial decisions, absolutely neutral and free of human bias. This has been proven wrong multiple times: such systems learn by looking at the world as it is, not as it ought to be. AI is also spreading into the healthcare industry, but even health data is not free of biases against women. Language translation machines have many times assumed doctors to be ‘male’, even when the native language uses a gender-neutral term or no gender is mentioned at all. These systems are also more likely to associate positive terms with Western names than with names from other parts of the world. And a study at Boston University, using Google News data, created an algorithm which labelled women as homemakers and men as software developers.
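The Boston University result can be probed with a few lines of code. The sketch below assumes the gensim library and its hosted copy of the word2vec vectors trained on Google News (a download of well over a gigabyte); it runs the same kind of word-analogy query the researchers used, as an illustration of the technique rather than the study’s own code.

```python
# Probing a pretrained word embedding for gender stereotypes, in the
# style of the Boston University study (Bolukbasi et al., 2016).
# Assumes gensim is installed; the vectors are a large download.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Vector arithmetic: "man" is to "computer_programmer"
# as "woman" is to ... ?
for word, score in vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
):
    print(f"{word}: {score:.2f}")
# The study reported "homemaker" as the top completion: the stereotype
# sits in the geometry of the embedding itself, learned from news text.
```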


A study by the American Civil Liberties Union found that Amazon’s facial recognition system incorrectly matched numerous members of Congress to mugshots, with people of colour disproportionately represented among the false matches. This highlights that Artificial Intelligence is not ready to be introduced into law enforcement systems.

Apart from the tangible, practical issues attached to AI, something that has bothered me personally for the longest time is that the default setting of voice assistants is a female voice, which is problematic in itself. AI assistants’ responses are more often than not apologetic and subservient; a UN report finds that these assistants, with their generically female names, perpetuate gender biases.

In the Indian context, given the large number of biases within our society, which are reflected in policies and speeches alike, a translation of the same into Artificial Intelligence is dangerous. The lack of representation of various communities and ethnicities in tech makes the occurrence of these biases more likely. States like Uttar Pradesh, Rajasthan and Uttarakhand are already using facial recognition software along with digital criminal records.

The primary problem occurs during the collection of data, when the data is not representative of reality: for instance, when an algorithm is fed the faces of only one race, the resulting facial recognition AI is inherently bad at recognising other races. To illustrate, the MIT study ‘Gender Shades’ revealed that gender classification systems from companies like Microsoft and IBM showed error rates as high as 34.4% for dark-skinned women, as opposed to light-skinned men. A similar problem occurs when the data itself is prejudiced.
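What made the ‘Gender Shades’ audit effective was disaggregation: reporting error rates per subgroup instead of a single overall accuracy figure. The sketch below shows the idea on a handful of invented prediction records; the numbers are hypothetical and are not the study’s data.

```python
# Disaggregated evaluation in the spirit of the 'Gender Shades' audit:
# compute error rates per subgroup rather than one aggregate score.
# The records below are invented for illustration.
from collections import defaultdict

# (true gender, skin type, predicted gender) for a hypothetical classifier
records = [
    ("female", "darker",  "male"),
    ("female", "darker",  "male"),
    ("female", "darker",  "female"),
    ("male",   "lighter", "male"),
    ("male",   "lighter", "male"),
    ("female", "lighter", "female"),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for true, skin, predicted in records:
    group = f"{skin} {true}"
    counts[group][1] += 1
    if predicted != true:
        counts[group][0] += 1

for group, (errors, total) in sorted(counts.items()):
    print(f"{group}: {errors / total:.0%} error rate ({errors}/{total})")
# A single overall accuracy figure would hide exactly the gap these
# per-group numbers expose.
```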

It is also possible for bias to be introduced when the AI is learning which aspects it has to consider in its analysis; in the aforementioned case of Amazon, the AI taught itself to consider the applicant’s gender.

Since AI systems are not created with a sensitivity towards being unbiased, or with an active effort to exclude prejudices, bias is not eliminated at the creation stage. AI exists in a world without social context or an understanding of ‘fairness’, given that these ideas are subjective and vary from case to case. In making systems compatible with all contexts, such specific ideas get ignored.

Also Read: Meet Dr. Katie Bouman: The Woman Behind The First Black Hole Image

To help mitigate this issue, it is important to have women in the system who can point out problems on the basis of their lived experiences. However, it should not always be the duty of under-represented groups to push for less bias in Artificial Intelligence; the additional mental effort and emotional labour required is an unfair burden to place on groups that are already under-represented.

This lapse calls for a legal framework in an increasingly technological society, as these systems are evolving from prototypes and being put into actual practice. Yet Indian law, which already lags behind international standards in terms of cyber laws, will have a lot of catching up to do.

Ensuring the removal of biases is a continuous process, as is the case with all forms of discrimination. It requires long-term research and investment across multiple disciplines. Google, for instance, is investing time in ensuring its AI is not discriminatory. Recognising that its facial recognition and cameras were not picking up non-white skin tones adequately, it developed technology to detect slighter differences in light. It went on to have insults and slurs hurled at its home assistants and speakers, to check how the AI reacts to such terms and where fixes were needed. Sorting out AI’s discriminatory behaviour hence requires an active effort from everyone involved in the process of creation and testing, to recognise and catch the problems.


Feature Image Source: Pixabay.com
