The promises offered by AI/ML technologies raise important ethical and policy concerns. These concerns are clearly outlined in the work of Mittelstadt et al. (2016). Below, we build on their work with some practical examples.
Let us take the example of an algorithm used to predict the risk of heart failure. Algorithms are never 100% accurate. If the algorithm has an accuracy of 85%, then on average 15 out of every 100 patients will be incorrectly diagnosed.
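The arithmetic behind this claim can be sketched as follows. This is a minimal illustration, assuming the 85% accuracy figure and the 100-patient cohort from the example above; the function name is our own.

```python
# Illustrative sketch: the expected number of misdiagnoses implied by a
# given accuracy. The 0.85 accuracy and cohort of 100 are the hypothetical
# figures from the text, not real clinical numbers.

def expected_misdiagnoses(accuracy: float, n_patients: int) -> float:
    """Expected number of incorrect diagnoses across a cohort of patients."""
    return (1.0 - accuracy) * n_patients

print(round(expected_misdiagnoses(0.85, 100)))  # 15 patients out of 100
```

Note that a single accuracy figure hides the split between false positives and false negatives, which can matter very differently for patients.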
An example here is that of “black boxes”, where the criteria that lead to a prediction are inscrutable and humans are left guessing. In healthcare, an AI system could learn to make predictions based on factors less related to the disease itself than to the brand of MRI machine used, the time a blood test is taken, or whether a patient was visited by a chaplain.
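This kind of “shortcut” learning can be made concrete with a toy sketch. Everything below is invented for illustration (the hospitals, the scanner brands, the numbers): a model that keys on a feature spuriously correlated with illness scores perfectly on the data it was trained on, then fails as soon as that correlation breaks.

```python
# Hypothetical sketch of a shortcut feature. In hospital A, sicker patients
# happen to be scanned on brand "X" machines, so brand and illness are
# spuriously correlated in the training data.

# (scanner_brand, truly_sick) records from hospital A.
hospital_a = [("X", True)] * 40 + [("Y", False)] * 60

def predict_sick(scanner_brand: str) -> bool:
    # A "learned" rule that only looks at the shortcut feature.
    return scanner_brand == "X"

def accuracy(records) -> float:
    correct = sum(predict_sick(brand) == sick for brand, sick in records)
    return correct / len(records)

print(accuracy(hospital_a))  # 1.0 -- looks perfect on hospital A's data

# Hospital B uses brand "X" for everyone, so the shortcut breaks down.
hospital_b = [("X", True)] * 40 + [("X", False)] * 60
print(accuracy(hospital_b))  # 0.4 -- every healthy patient is flagged
```

The rule never looked at anything medically meaningful, yet nothing in its output on hospital A’s data would reveal that, which is precisely what makes such systems hard to scrutinize.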
An example is the COMPAS algorithm, used in the US Criminal Justice System to calculate the likelihood of recidivism. An investigation conducted by ProPublica revealed that, because the data on which it is based is biased against Black people, the algorithm generally outputs higher risk scores for Black defendants than for White defendants.
Another example is predictive policing. Because it is often based on determining individuals’ threat levels from commercial and social data, it can improperly link dark skin to higher threat levels, which can in turn lead to more arrests in areas inhabited by people of color.