March 3, 2021 14:08

Fighting against AI bias

If the training data reflects human prejudices, errors creep into algorithms; procedural checks are needed

In my previous columns, I described how today’s Artificial Intelligence systems use deep learning to execute tasks such as image recognition, speech processing, and text analysis. Such deep learning systems apply advanced mathematical techniques to vast amounts of data to make recommendations, and are even used to automate decisions. We humans, on the other hand, have our own prejudices, individual and collective, conscious and unconscious. Our cognitive capabilities are limited, and there are constraints on how much data we can reasonably process. So, AI can be expected to make more objective and unbiased decisions than we do, right? What then is AI bias, or algorithmic bias?

AI bias

Bias refers to a higher rate of error that an AI system or algorithm makes for certain groups of people. For example, a facial recognition system can have a higher error rate for women than for men. Or, AI that works well at recognising faces with lighter skin colours may not do so well with darker skin colours. What’s worse, when attributes such as gender, skin colour, age, income, geography, and others are combined, the error rates can be even higher.
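For intuition, here is a minimal sketch of how such a disparity could be measured by comparing error rates across groups; the predictions, labels, and group names below are entirely made up for illustration.

```python
# Minimal sketch: comparing error rates across groups.
# All records below are invented for demonstration purposes.
from collections import defaultdict

# Each record: (group, true_label, predicted_label)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in results:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate = {rate:.0%}")
# A large gap between the groups' error rates is a signal of bias.
```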

A few years ago, MIT researchers found exactly this behaviour in leading commercially available facial recognition systems. This specific bias arose mainly because the data used to train the AI was not representative of the real world. And the problem is not limited to image recognition; similar biases and mistakes are possible in every domain where deep learning is applied.

What are the consequences of AI bias?

Bias can have huge implications. Consider these scenarios:

  • Bias in facial recognition AI used by the police can lead to wrongful arrests and prosecution
  • A self-driving vehicle causes accidents when it fails to identify or incorrectly identifies what’s in front of it
  • Smart speakers and speech recognition systems fail to recognise your accent
  • AI used in the criminal justice system denies bail or recommends a harsher punishment based on race
  • Lending algorithms incorrectly reject your loan or credit application because of where you live
  • AI screening resumes rejects women candidates (even when gender is hidden from AI, it infers it from other attributes of a candidate)
  • Advertising algorithms don’t even show you the job ads that you are interested in and suited for because of your gender/age
  • Online exam proctoring tools fail to identify students with darker skin colour

By the way, these are not hypothetical AI-gone-wrong scenarios but are all instances that actually happened. The reality is that AI systems have their blind spots and bias is a real concern when it comes to developing and using AI systems.

To be sure, human decision-making is not perfect, and we have our own biases, as I noted earlier. But with AI, we should strive to do better, not merely exchange one set of biases for another.

What can you do as a manager?

First, understand the sources of bias: it can lie in the training data, in the machine learning methods used, and in the context in which the AI is deployed.
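As a simple illustration of the first source, one might start by checking how well each group is represented in the training data. This is only a sketch; the data and the column name used here are made up.

```python
# Illustrative sketch: checking group representation in a training set.
# The data and the "skin_tone" column name are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "skin_tone": ["lighter"] * 80 + ["darker"] * 20,
    "label":     [1, 0] * 50,
})

representation = train["skin_tone"].value_counts(normalize=True)
print(representation)  # a heavily skewed distribution is a warning sign
```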

The issue of AI bias is still not well understood. Development teams tend to focus on technical accuracy metrics rather than on bias considerations. So, sensitise your teams to the possibility of AI bias. Strive for greater diversity of viewpoints and life experiences in the teams developing AI solutions so that a wider range of usage scenarios is considered.

Explore alternatives to deep learning methods, which do not lend themselves to easy interpretation by humans (for certain use cases, the law requires an explanation of the "why" behind a decision).
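As a sketch of what a more interpretable alternative can look like, a shallow decision tree exposes the exact rules behind each decision and can be printed and reviewed line by line. The example below uses a built-in toy dataset from scikit-learn purely for illustration; a real lending or bail model would look different.

```python
# Sketch: an interpretable model whose decision rules can be inspected.
# Uses a built-in toy dataset; this is not a real decision system.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a deep network, the learned rules can be printed and audited.
print(export_text(model, feature_names=list(X.columns)))
```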

Adapt and enhance your AI life cycle methodologies to mitigate the risk of AI bias. Internal and external AI audits can also help increase the maturity of your AI practices.

Analyse the impact of the AI making mistakes and provide redressal mechanisms. One way to do this is to allow for a human-in-the-loop in the workflow, where humans can override algorithmic recommendations.
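A minimal sketch of such a workflow is below. All the names, thresholds, and stub functions are hypothetical placeholders, not a real system's API; the idea is simply that low-confidence or adverse recommendations are routed to a human reviewer instead of being applied automatically.

```python
# Sketch of a human-in-the-loop workflow. All names and thresholds are
# hypothetical placeholders, not a real system's API.
CONFIDENCE_THRESHOLD = 0.9

def model_recommendation(application):
    # Placeholder for the AI model; returns (recommendation, confidence).
    return "reject", 0.72

def human_review(application, recommendation):
    # Placeholder for routing the case to a human reviewer, who can override.
    print(f"Escalating {application['id']} (model said: {recommendation})")
    return "pending_human_decision"

def decide(application):
    recommendation, confidence = model_recommendation(application)
    # Low-confidence or adverse decisions go to a human instead of auto-applying.
    if confidence < CONFIDENCE_THRESHOLD or recommendation == "reject":
        return human_review(application, recommendation)
    return recommendation

print(decide({"id": "loan-123"}))
```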

Some of the questions around bias are nuanced and are not purely technical issues. For example, notions of what is fair can be culture- and context-specific. A liberal arts perspective can enrich discussions on how best to address AI bias.

Any new technology without a moral compass is a disaster waiting to happen. AI is no exception. The stakes are high because AI is expected to be widely adopted. A super majority of the Indian public views AI favourably according to a 2020 Pew Research Center study. The onus is on the creators and managers of AI systems to justify this trust by ensuring that they minimise and mitigate the risks of AI bias.