Making AI bias-free and explainable
Thursday November 25, 2021, By Hima Elizabeth Mathew
The moment one hears of Artificial Intelligence (AI), the images that come to mind are likely from an array of Hollywood flicks like Iron Man, Minority Report, and Eagle Eye, to name a few. In these films, AI is either used to fight crime or tries to overpower the human race. Yet most people are flabbergasted at the suggestion that AI can aid organizations on their journey towards inclusion by mitigating bias.
“Most often people believe that bias is enhanced by AI,” said Shalini Kapoor, IBM Fellow and CTO for AI. She was speaking recently at the Working Mother and Avtar Best Practices of the 100 Best Conference. To exemplify: a simple Google search for the word “doctor” would churn out images of men in white coats, while the word “nurse” would bring up only pictures of women. Hence, the obvious conclusion would be that AI is biased and misleading.
Shalini added that this is because we have been teaching AI our notions, sentiments, and thought processes. These are learned by AI and fed back to us. IBM Research demonstrated this ingrained bias through a unique experiment that detected bias in the Bollywood film industry. After scientifically analyzing over 4,000 films from 1970 to 2017, researchers found stark differences in how plots described male and female actors. The adjectives used for men were honest, innocent, affluent, strong, and successful; for women, terms like pretty, attractive, modern, and beautiful appeared instead. The system cannot be blamed for being biased if the decades of data fed into it are themselves biased.
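The kind of analysis described above can be sketched in a few lines: tally which adjectives co-occur with male versus female character mentions in plot summaries. This is a minimal illustration, not IBM Research's actual method; the word lists and sample plot below are hypothetical.

```python
from collections import Counter

# Illustrative word lists (assumptions, not taken from the IBM study).
MALE_TERMS = {"he", "him", "his", "actor"}
FEMALE_TERMS = {"she", "her", "hers", "actress"}
ADJECTIVES = {"honest", "innocent", "affluent", "strong", "successful",
              "pretty", "attractive", "modern", "beautiful"}

def adjective_counts(plots):
    """Tally adjectives appearing in the same sentence as a gendered term."""
    male, female = Counter(), Counter()
    for plot in plots:
        for sentence in plot.lower().split("."):
            words = set(sentence.split())
            found = words & ADJECTIVES
            if words & MALE_TERMS:
                male.update(found)
            if words & FEMALE_TERMS:
                female.update(found)
    return male, female

# Hypothetical plot summary, echoing the pattern the study reported.
plots = ["He is a strong and successful doctor. She is a pretty nurse."]
male_adjs, female_adjs = adjective_counts(plots)
print(dict(male_adjs))    # adjectives near male mentions
print(dict(female_adjs))  # adjectives near female mentions
```

Run over thousands of real plot summaries, a tally like this is what surfaces the skew the experiment found: achievement words clustering around men, appearance words around women.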
In an organizational context, AI is powering critical workflows like loan processing, employment, quality control, and customer management. Avtar’s online diversity job portal myavtar is a classic example of using AI during the recruitment process.
Bias can creep into the system because of biased data, or simply because humans developed the AI. Shalini emphasized that trusting the AI's decisions is the most crucial aspect of using AI successfully.
So, what does it mean to trust a decision made by a system? The first aspect is ensuring that the system makes fair, non-discriminatory decisions. Here, AI itself can be used to audit AI: organizations like IBM provide open-source toolkits to check and improve the fairness and robustness of models. Beyond that, it is also critical that AI's decisions be explainable, rather than delivered as a black-box answer. Consequently, to utilize AI for maximizing ROI, it is necessary to give fairness a fair chance.
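One of the simplest fairness checks such toolkits compute is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The hand-rolled sketch below only illustrates the idea; the loan-approval data and group labels are hypothetical, and real audits would use a maintained toolkit rather than this snippet.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable (1) outcome rates: unprivileged / privileged.
    A value near 1.0 suggests parity; the common "80% rule" flags < 0.8."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Illustrative loan-approval outcomes: 1 = approved, 0 = denied.
outcomes = [1, 0, 1, 0, 1, 1, 1, 0]
groups   = ["f", "f", "f", "f", "m", "m", "m", "m"]

ratio = disparate_impact(outcomes, groups, unprivileged="f", privileged="m")
print(round(ratio, 2))  # 0.67 -- below 0.8, so this data would be flagged
```

A ratio this far below 1.0 is exactly the kind of signal a fairness audit surfaces before a model powering loan processing or hiring goes into production.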
- Hailing from diverse backgrounds in journalism, psychology, and human resources, Dr. Hima holds a PhD in Organisational Behaviour from IIT Madras and is Senior Manager – Research and Solutions at Avtar.