The Value of Machine Unlearning for Businesses, Fairness, and Freedom

Our work as data scientists is often focused on building predictive models. We work with vast quantities of data, and the more we have, the better our models and predictions can potentially become. Once we have a high-performing model, we continue to retrain and iterate, introducing new data as required to keep the model fresh and prevent its performance from degrading. The result is that the model's performance is largely maintained and we continue delivering value for users.

But what happens if restrictions are introduced around a dataset or an individual data point? How do we then remove this information without compromising the model as a whole and without kicking off computationally expensive retraining? A potential answer that is gaining interest, and that we would like to explore, is machine unlearning.
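To make the cost of that question concrete, here is a minimal sketch (not a machine unlearning method, and purely illustrative data and names) of the naive baseline: when a deletion request arrives, drop the restricted rows and retrain the entire model from scratch. This full retrain is precisely the expense that unlearning techniques aim to avoid.

```python
# Naive baseline: retrain from scratch after a deletion request.
# All data, row indices, and model choices here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))                        # training features
y = (X[:, 0] + rng.normal(size=10_000) > 0).astype(int)  # training labels

original_model = LogisticRegression(max_iter=1_000).fit(X, y)

# A request arrives to remove specific training examples.
rows_to_forget = np.array([42, 917, 4_008])
keep = np.setdiff1d(np.arange(len(X)), rows_to_forget)

# "Unlearning" by brute force: refit on everything except the forgotten rows.
# For large models and datasets, this full retrain is the cost that
# machine unlearning methods try to sidestep.
retrained_model = LogisticRegression(max_iter=1_000).fit(X[keep], y[keep])
```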