Bias vs. Fairness vs. Explainability in AI

Over the last few years, there has been a distinct focus on building machine learning systems that are, in some way, responsible and ethical. The terms “Bias,” “Fairness,” and “Explainability” come up all over the place, but their definitions are usually fuzzy and the three are widely misunderstood to mean the same thing. This blog aims to clear that up.

Bias

Before we look at how bias appears in machine learning, let’s start with the dictionary definition for the word:

“Inclination or prejudice for or against one person or group, especially in a way considered to be unfair.”

Look! The definition of bias includes the word “unfair.” It’s easy to see why the terms bias and fairness get confused for each other a lot.

The Importance of Defining Fairness for Decision-Making AI Models

Defining fairness is a problematic task. The definition depends heavily on context and culture, and when it comes to algorithms, every problem is unique and will be solved using unique datasets. Algorithmic fairness can stem from statistical and mathematical definitions, and even from legal definitions of the problem at hand. Furthermore, if we build models based on different definitions of fairness for the same purpose, they will produce entirely different outcomes.

The measure of fairness also changes with each use case. AI for credit scoring is entirely different from customer segmentation for marketing efforts, for example. In short, it’s tough to land on a catch-all definition, but for the purpose of this article, I thought I’d make the following attempt: An algorithm has fairness if it does not produce unfair outcomes for, or representations of, individuals or groups.
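To make the point concrete, here is a minimal, hypothetical sketch (all data is made up for illustration) showing how two common statistical fairness metrics, demographic parity and equal opportunity, can reach opposite verdicts on the exact same model predictions:

```python
# Illustrative sketch: two fairness metrics can disagree on the same predictions.
# All data below is fabricated for demonstration purposes.

def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between group 1 and group 0."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate(1) - rate(0)

def equal_opportunity_diff(preds, labels, groups):
    """Difference in true-positive rates between group 1 and group 0."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return tpr(1) - tpr(0)

# Hypothetical credit-approval outputs: 1 = approved, 0 = denied
preds  = [1, 1, 0, 0, 1, 1, 0, 0]
labels = [1, 0, 0, 0, 0, 0, 1, 1]   # ground-truth creditworthiness
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # two demographic groups

print(demographic_parity_diff(preds, groups))         # 0.0  -> "fair" by this metric
print(equal_opportunity_diff(preds, labels, groups))  # -1.0 -> very unfair by this one
```

Both groups are approved at the same rate, so demographic parity is perfectly satisfied, yet every creditworthy member of group 1 is denied, so equal opportunity flags maximal unfairness. Which metric is the "right" one depends entirely on the use case.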

How to Overcome AI Distrust With Explainability

AI has permeated our lives in nearly every aspect. We rely on it to get accurate search results, to enable conversational marketing, ad personalization, and even to suggest medical treatment.

In spite of advances in AI and its widespread use, there's considerable distrust in it. Such distrust and unease arise in part from popular media representations of AI in movies where robots threaten to overthrow the human race.

Regulating ML/AI-Powered Systems for Bias

Siri and Alexa are good examples of AI, as they listen to human speech, recognize words, perform searches, and translate the text results back into speech. McDonald's recent acquisition of Dynamic Yield, an AI company whose technology analyzes customers' spending and eating habits and recommends other food for them to purchase, has taken the use of AI to the next step. AI technologies raise important issues like personal privacy rights and whether machines can ever make fair decisions.

There are two main areas where regulation can be helpful.