
To Avoid Unfair Outcomes
An action could be found to be discriminatory if it has a disproportionate impact on one group of people.
Predictive policing is a case in point. Because such systems often infer individuals' threat levels from commercial and social data, they can improperly associate dark skin with higher threat levels, which in turn can lead to more arrests in areas inhabited by people of color.
Tools for Fairness
Aequitas – Bias and Fairness Audit Toolkit
Aequitas is an open source bias and fairness audit toolkit that was released in 2018. It is designed to enable developers to seamlessly test models for a series of bias and fairness metrics in relation to multiple population sub-groups.
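For illustration, a minimal audit with the Aequitas Python package could look as follows. It assumes a scored dataframe with the columns score and label_value plus one column per protected attribute; the toy data, the race column and the chosen reference group are assumptions made for this sketch rather than part of the toolkit.

import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Scored data: 'score' = model prediction, 'label_value' = ground truth,
# plus one column per protected attribute (names here are illustrative).
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 0, 1, 0, 1, 1, 0],
    "race":        ["white", "white", "black", "black",
                    "white", "black", "black", "white"],
})

# Per-group confusion-matrix counts and derived metrics (FPR, FNR, ...).
xtab, _ = Group().get_crosstabs(df)

# Disparities of each group's metrics relative to a chosen reference group.
bdf = Bias().get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "white"},
    alpha=0.05, mask_significance=True)

# Flag which groups pass or fail the corresponding fairness criteria.
fdf = Fairness().get_group_value_fairness(bdf)
print(fdf[["attribute_name", "attribute_value", "fpr_disparity"]])

The resulting frame lists, for each population sub-group, its metric values, its disparities relative to the reference group, and whether it passes the associated fairness criteria.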
Microsoft Fairlearn – Reductions Approach to Fair Classification
As part of Microsoft Fairlearn, this is a general-purpose methodology for fair classification. For binary classification, the method reduces fair classification under a chosen fairness constraint to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraint.
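A minimal sketch of this reductions approach as exposed by the Fairlearn Python package might look as follows; the synthetic data and the choice of logistic regression as the base learner are assumptions made for this example.

import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # features
sensitive = rng.integers(0, 2, size=200)   # protected attribute (illustrative)
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Reduce fair classification to a sequence of cost-sensitive problems:
# ExponentiatedGradient reweights the data and repeatedly refits the base
# learner, returning a randomized classifier that satisfies the constraint.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)

y_pred = mitigator.predict(X)              # predictions from the randomized classifier

Other constraints offered by the package, such as equalised odds, can be swapped in for demographic parity in the same way.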
What-if Tool from Google
The What-if Tool from Google is an open-source TensorBoard web application which lets users analyse an ML model without writing code. It visualises counterfactuals so that users can compare a data point to the most similar point for which the model predicts a different result. In addition, users can explore the effects of different classification thresholds, taking into account constraints such as different numerical fairness criteria. A number of demos are available showing how the different functions work on pre-trained models.
IBM AI Fairness 360 Toolkit
The IBM AI Fairness 360 (AIF360) toolkit contains a comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets at the pre-processing and model training stages.
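A rough sketch of the kind of workflow the toolkit supports, assuming the AIF360 Python package, is shown below: a dataframe is wrapped as a dataset, a dataset-level fairness metric is computed, and a pre-processing mitigation (reweighing) is applied. The toy data, column names and group definitions are illustrative assumptions.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'label' is the outcome, 'sex' the protected attribute (1 = privileged).
df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
    "age":   [25, 40, 31, 22, 55, 38, 29, 44],
    "label": [1, 1, 0, 0, 1, 0, 1, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Dataset-level fairness metric before mitigation.
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("disparate impact before:", metric.disparate_impact())

# Pre-processing mitigation: reweigh instances so that outcomes are
# comparable across the privileged and unprivileged groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(transformed,
                                        privileged_groups=privileged,
                                        unprivileged_groups=unprivileged)
print("disparate impact after:", metric_after.disparate_impact())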