When applied to risk assessment in the criminal justice system, is the ethical debate around algorithmic fairness leading us down the wrong track?
This article questions the current framing of the ethical debate surrounding predictive risk assessment in the criminal justice system. At present, that debate revolves around how to predict criminal behaviour through machine learning in ethical ways; for example, how to reduce bias while maintaining accuracy. This falls far short of fundamentally questioning the purpose for which we want to deploy ML algorithms: should we use them to predict criminal behaviour, or rather to diagnose it, intervene on it and, most importantly, better understand it? Each approach calls for a different method of risk assessment: prediction relies on regression, whereas diagnosis relies on causal inference. I argue that if the purpose of the criminal justice system is to treat crime rather than forecast it, and to monitor the effects of its own interventions on crime, whether they increase or reduce it, then focusing our ethical debates on prediction is to lead ourselves down the wrong track. Let us have a look at the present situation.
Continue reading "The ethics of algorithmic fairness"
This post was inspired by reading ‘Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning’ by Daniel Greene, Anna Lauren Hoffmann and Luke Stark. The paper analyses public statements on ethical approaches to Artificial Intelligence (AI) and Machine Learning (ML) issued by independent institutions, ranging from ‘OpenAI’ and ‘The Partnership on AI’ to ‘The Montreal Declaration for a Responsible Development of Artificial Intelligence’ and ‘The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems’, among others. The researchers’ aim was to uncover assumptions and common themes across these statements, and to identify which of them foster ethical discourse and which hinder it. This article by no means attempts to reproduce the content of their paper. Rather, it builds on some interesting considerations that emerged from it and that, in my opinion, deserve further scrutiny.
Continue reading "The ethics of AI and ML ethical codes"