New Paths for Intelligence — ISSAI19

Leading researchers in the field of Artificial Intelligence met to discuss the future of (human and artificial) intelligence and its implications for society at the first Interdisciplinary Summer School on Artificial Intelligence, held from the 5th to the 7th of June in Vila Nova da Cerveira, Portugal. Members of the AI for People association attended to gain a perspective on current trends in AI that bear on societal benefits and problems. In the following article, we provide a brief overview of the topics discussed in the talks and highlight the societal advantages and disadvantages implied by AI progress. Not all the talks are summarised; we focused on those most relevant to the attending members of AI for People.

Computational Creativity

Tony Veale from the Creative Language Systems Group at UCD provided an overview of Computational Creativity (CC). This research domain aims to create machines that create meaning. Creativity has been called the final frontier of artificial intelligence research [1]. Creative computer systems are already a reality in our society, whether as generated fake news, computer-generated art or music. But are those systems truly creative, or merely generative? The CC domain does not by any means aim to replace artists and writers with machines; rather, it tries to develop tools that can be used in a co-creative process. Such semi-autonomous creative systems can provide the computational power to explore parts of the creative space that would not be accessible to creators on their own. Prof. Veale's battery of Twitter bots aims to provoke interaction within the vibrant and dynamic Twitter community [2]. The holy grail of CC, developing truly creative systems capable of criticising, explaining and creating their own masterpieces, is still considered to be of debatable reach.

Machines that Create Meaning (on Twitter). More creative Twitterbots at afflatus.ucd.ie.

Implications: We tend to see Artificial Intelligence as something logical, reasonable and efficient, and we often associate its influence with the economy and technology. We might overlook that the domain of creativity, through which a society develops its culture, art and communication, is equally affected by AI. We need to become aware of this influence. On the one hand, it works in favour of human creative potential by providing powerful tools that can help us develop new ideas. On the other hand, there is the danger of underestimating this creative influence and falling for fake news and the like. The former is the benevolent use of CC; the latter is its malicious abuse.

Machine Learning in History of Science

Jochen Büttner from the MPIWG Berlin presented new tools for a long-established discipline: using machine learning approaches for corpus research in the history of science. Büttner's starting point is the extraction of knowledge from the analysis of an ancient literature corpus. Conventional methods, e.g. the manual identification of similar illustrations across different documents, are highly time-consuming and often impractical. Machine learning techniques provide a solution to such tasks.

Büttner explained how different techniques are being used to detect illustrations in digitised books and to identify clusters of illustrations based on the use of the same woodblocks in the printing process (blocks that were shared between printers or passed on).

Credit: Jochen Büttner from the MPIWG
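To make the clustering step above more concrete, here is a minimal sketch of what such a pipeline could look like: a pretrained CNN is used purely as a generic feature extractor for illustration crops, and the resulting embeddings are grouped by visual similarity. The model choice, the distance threshold and the folder of crops are assumptions made for illustration; this is not the actual MPIWG implementation.

```python
# Minimal sketch: group illustration crops by visual similarity, as a stand-in for
# clustering prints made from the same woodblock. Model choice, distance threshold
# and the folder of crops are assumptions, not the actual MPIWG pipeline.
from pathlib import Path

import torch
from PIL import Image
from sklearn.cluster import AgglomerativeClustering  # scikit-learn >= 1.2
from torchvision import models, transforms

# Pretrained CNN used purely as a generic feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()   # drop the classifier head, keep 512-d embeddings
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(paths):
    """Return one L2-normalised embedding per illustration crop."""
    feats = []
    with torch.no_grad():
        for p in paths:
            img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(model(img).squeeze(0))
    return torch.nn.functional.normalize(torch.stack(feats), dim=1).numpy()

crops = sorted(Path("illustration_crops").glob("*.png"))   # hypothetical folder
embeddings = embed(crops)

# Cosine distance plus a hand-tuned threshold decide when two prints "match".
clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=0.3,
                                    metric="cosine", linkage="average")
for path, label in zip(crops, clusterer.fit_predict(embeddings)):
    print(label, path.name)
```

Crops sharing a label would then be candidate impressions from the same woodblock, to be verified by the historian rather than trusted blindly.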

Implications: The research provides an interesting example of how one field (history of science) can greatly benefit from another (artificial intelligence). With only six months of AI experience, Büttner can achieve results that would otherwise take years of effort. Yet, from an AI perspective, the implementation is rather naive. The divergence between abstract machine learning research and actual applications in other domains is clear, as specialised algorithms could yield better results. One challenge the talk addressed is the rapid pace of development in ML, which can be overwhelming even for those who specialise in machine learning. Moreover, ML demands a fairly high level of mathematical and computational understanding, which makes it even harder for other domains to gain access. It is therefore key to provide adequate educational paths for everyone and to encourage the application of AI by establishing adequate publication formats, which will in turn foster interdisciplinary dialogue.

Artificial Intelligence Engineering: A Critical View

The industry talk was given by Paulo Gomes, Head of AI at Critical Software. Gomes offered the perspective of someone who worked in research for years before switching to industry. The company is involved in several projects that use machine learning: identifying anomalous behaviour of vessels (navigation problems, drug trafficking, illegal fishing), predicting phone signal usage to prevent mobile network shutdowns, optimising energy consumption in car manufacturing, and even supporting decision making under stress in a military context. The variety of domains shows how deeply AI is involved in our 'technologised' society.

https://xkcd.com/1425/

Implications: The talk also addressed the critical gap between what companies promise and what is actually possible with AI. This gap is not only bad for the economy, but directly harmful to people. As AI grows, expectations have already risen far beyond what can be achieved in either research or industry. The AI hype about massive leaps in technology due to recent developments in deep learning is somewhat justified, and it has triggered a "New Arms Race for AI" [3] between the USA, Russia and China. The talk pointed out that this technological bump fits the hype cycle for emerging technologies, with deep learning as the technology trigger (see image).

The hype cycle as described by the American research, advisory and information technology firm Gartner (diagram CC BY-SA 3.0).

Suddenly, every company needs to open an AI department even though there are too few people with actual experience in the field. A wave of job quitting and career switching is currently being observed. Nonetheless, in most cases people with little field experience find themselves in a company that has even less — low knowledge growth and little appreciation due to limited understanding on the company's side. These people might end up jumping in front of, rather than onto, the AI hype train.

Why superintelligent AI will never exist

This talk was given by Luc Steels from the VUB Artificial Intelligence Lab (now at the evolutionary biology department at Pompeu Fabra University). In a similar fashion to the previous talk, Steels outlined the rise of AI technologies in research, the economy and politics. He described the cycle in a somewhat different way, one that can be found in various other phenomena: climate change has been discussed for decades, yet for a long time it received very little actual attention in politics and economics. Only when people are faced with the immediate consequences do politics and economics start to pick it up. In the race for AI technology, we can observe a first phase of underestimation reflected in a lack of development: for a long time AI struggled to establish itself in the academic world and found little attention in the economy. Now we are facing an overestimation in which everyone is creating ever higher expectations. Why is it that the promised superintelligent AI will not exist? Here are a few examples and implications from Steels:

  • Most deep learning systems are very dataset- and task-specific. For example, systems trained to recognise dogs fail to recognise other animals, or dog images that are turned upside-down (a toy sketch of this follows the list). The features learned by the algorithm are irrelevant when it comes to the human categorisation of reality.
  • It is said that these problems can be overcome with more data. But many of these problems stem from the distribution and probabilities within the data, and those will not change: such systems do not learn global context, even when presented with more data.
  • Language systems can be trained without a task and can be provided with massive amounts of context. Yet language is a dynamic, evolving system that changes strongly over time and context. Language models would therefore lose their validity quickly unless retrained on a regular basis, which is an extremely costly computation.
“A deep-learning system doesn’t have any explanatory power, the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.”
Geoffrey Hinton, computer scientist at the University of Toronto — founding father of neural networks
  • The systems learn from our data, not our knowledge. In some cases, therefore, these systems do not apply any sort of common sense and absorb our biases into their models. For example, Microsoft's Tay chatbot started spreading anti-Semitism after only a few hours online [4].
  • Reinforcement learning algorithms are implemented to optimise traffic on a web page, not to provide content. Consequently, click-bait is more valuable to the algorithm than useful information.
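The toy sketch below (not from the talk) makes the first point tangible: a standard classifier is trained on scikit-learn's digits dataset in its canonical orientation and then evaluated on the very same digits rotated by 180 degrees. The sharp drop in accuracy illustrates how little of the human notion of a "digit" the model has actually learned.

```python
# Toy illustration (not from the talk): a classifier trained on digits in their
# canonical orientation collapses when the very same digits are shown upside-down.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("upright test accuracy:   ", clf.score(X_test, y_test))

# Rotate each 8x8 image by 180 degrees, i.e. turn it upside-down.
X_flipped = np.array([img.reshape(8, 8)[::-1, ::-1].ravel() for img in X_test])
print("upside-down test accuracy:", clf.score(X_flipped, y_test))
```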

Conclusion

This summer school was the first of its kind, a collaboration between the AI associations of Spain and Portugal. Despite the small number of participants and the lack of female speakers, this first interdisciplinary platform for the AI community provided a basic discussion of the implications of AI and its future. More people should be educated about the illusory expectations created by the AI hype in order to prevent damage to research and society. The author would like to thank João M. Cunha and Matteo Fabbri for their contributions to this article.

References:
[1] Colton, Simon, and Geraint A. Wiggins. "Computational creativity: The final frontier?" ECAI 2012, Vol. 12, 2012.
[2] Veale, Tony, and Mike Cook. Twitterbots: Making Machines that Make Meaning. MIT Press, 2018.
[3] Barnes, Julian E., and Josh Chin. "The New Arms Race in AI." The Wall Street Journal, 2018.
[4] Wolf, Marty J., K. Miller, and Frances S. Grodzinsky. "Why we should have seen that coming: comments on Microsoft's Tay experiment, and wider implications." ACM SIGCAS Computers and Society 47.3 (2017): 54–64.

New Paths for Intelligence — ISSAI19 was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.

Layers of Responsibility for a better AI future

Do we really understand responsibilities in Artificial Intelligence or are we confusing terms and technology in the debate? (Photo credit: https://flic.kr/p/27pq9bw)

This blog post has not been written by an AI. It has been written by a human intelligence pursuing a PhD in Artificial Intelligence. Although the first sentence seems trivial, it might not be so in the near future. If we can no longer distinguish a machine from a human during a phone call, as Google Duplex has promised, we should start to be suspicious about textual content on the web. Bots are already said to be responsible for 24% of all tweets on Twitter. Who is responsible for all this spam?

But really, this blog post has not been written by an AI — trust me. If it were, it would be much smarter, more eloquent and more intelligent, because eventually AI systems will make better decisions than humans. The whole argument about responsible AI is largely an argument about how we define 'better' in the previous sentence. But first, let us point out that the ongoing discussion about responsible AI often conflates at least two levels of understanding algorithms:

  • Artificial Intelligence in the sense of machine learning applications
  • General Artificial Intelligence in the sense of an above-human-intelligence system

This blog post does not aim to blur the line between humans and machines, nor does it aim to provide answers to the ethical questions that arise from artificial intelligence. It simply tries to contrast the conflated layers of AI responsibility and presents a few contemporary approaches at each of those layers.

Artificial Intelligence in the sense of machine learning applications

In recent years, we have clearly reached the first level of AI, which already presents us with ethical dilemmas in a variety of applications: autonomous cars, automated manufacturing and chatbots. Who is responsible for an accident caused by a self-driving car? Can a car decide in the face of a moral dilemma that even humans struggle to agree on? How can technical advances be combined with education programs (human resource development) to help workers practise new, sophisticated skills so they do not lose their jobs? Do we need to declare online identities (is it a person or a bot)? How do we control for the manipulation of emotions through social bots?

These are all questions we are already facing. The artificial intelligence that gives rise to them is a controllable system: its human creator (the programmer, company or government) can decide how the algorithm should be designed so that the resulting behaviour abides by whatever rules follow from the answers to these questions. The responsibility therefore lies with the human. In the same way that we sell hammers, which can be used as tools or abused as weapons, we do not hold the maker responsible for every malicious abuse of AI systems. Whether used for good or bad, these AI systems exhibit adaptability, interaction and autonomy, and each of these properties can be layered with its respective confines.

Chart taken from Virginia Dignum, Associate professor on Social Artificial Intelligence at TU Delft — Design and evaluation of human agent teamwork.

Autonomy has to act within the bounds of responsibility, which includes a chain of responsible actors. If we give full autonomy to a system, we cannot take responsibility for its actions; but as we do not yet have fully autonomous systems, the responsibility lies with the programmers, followed by supervision that normally follows company standards. Within this well-established chain of responsibility, which is in place in most industrial companies, we need to locate the responsibilities for AI systems with respect to their degree of autonomy. The other two properties, adaptability and interaction, contribute directly to the responsibility we can have over a system. If we allow unrestricted interaction with the system, we lose accountability and hence give away responsibility again. Accountability cannot only be about the algorithms: the interaction must also provide an explanation and a justification for the system to be accountable and, consequently, responsible. Each of these values is more than just a difficult balancing act; each poses intricate challenges in its very definition. Consider the explainability of accountable AI: we already see the rise of an entire field called XAI (Explainable Artificial Intelligence). Nonetheless, we cannot simply start explaining AI algorithms to everyone on the basis of their code; first we need to settle on a feasible level of explanation. Do we make the code open source and leave the explanation to the user? Do we provide security labels? Can we define quality standards for AI?

The latter has been suggested by the High-Level Expert Group on AI of the European AI Alliance. This group of 52 experts includes engineers, researchers, economists, lawyers and philosophers from academic, non-academic, corporate and non-corporate institutions. The first draft of the Ethics Guidelines for Trustworthy AI proposes guiding principles as well as investment and policy strategies, and in particular advises on how to use AI to build an impact in Europe by leveraging Europe's enablers of AI.

On the one hand, the challenges seem broad, and coming up with ethics guidelines that encompass all possible scenarios appears to be a daunting task. On the other hand, none of these questions is new to us. Aviation has a thorough and practicable set of guidelines, laws and regulations that allows us to trust systems which are already largely autonomous, and yet we do not ask for an explanation of the autopilot software. We cannot simply transfer those rules to all autonomous applications, but we should take the importance of such guidelines seriously rather than condemn the task.

General Artificial Intelligence in the sense of an above-human-intelligence system

In the previous discussion, we saw that the problems arising from the first level of AI systems affect us today and that we are dealing with them one way or another. The discussion should be different when we talk about General Artificial Intelligence. Here, we assume that at some point the computing power of a machine supersedes not only the computing power of a human brain (which is already the case), but gives rise to an intelligence that supersedes human intelligence. At this point, it has been argued, this will trigger an unprecedented jump in technological growth, resulting in incalculable changes to human civilisation — the so-called technological singularity.

In this scenario, we no longer deal with a tractable algorithm, as the super-intelligence would be capable of rewriting any rule or guideline it deems trivial. There would be no way of preventing the system from breaching any security barrier constructed with human intelligence. Many scenarios predict that such an intelligence would eventually get rid of humans or enslave mankind (see Bostrom's Superintelligence or The Wachowskis' Matrix trilogy). But there is also a growing number of serious research institutions that argue for alternative scenarios and study how we can align such an AI system with our values. This second level of AI has much larger consequences, with questions that can only be based on theoretical assumptions rather than pragmatic guidelines or implementations.

An issue that arises from conflating the two layers is that people tend to mistrust a self-driving car because they attribute to it some form of general intelligence that is not (yet) there. Currently, autonomous self-driving cars merely avoid obstacles and are not even aware of the type of object (man, child, dog). Furthermore, all the apocalyptic scenarios contain the same sort of fallacy: they argue using human logic. We simply cannot conceive a logic that would supersede our cognition. Every ethical principle, moral guideline or logical conclusion we want to attribute to the AI has been derived from thousands of years of human reasoning. A super-intelligent system might evolve this reasoning within a split second to a degree that it would take us another thousand years to understand that step. Any imagination we have about the future beyond the point of a super-intelligence is therefore as speculative as religious imagination. Interestingly, this conflation of thoughts has led to the founding of "The Church of Artificial Intelligence".

Responsibility at both levels

My responsibility as an AI researcher is to educate people about the technology they are using and the technology they will be facing. In the case of technology that is already in place, we have to disentangle it from the notion of Artificial Intelligence as an uncontrollable super-power that will overtake humanity. As pointed out, the responsibility for responsible AI lies with governments, institutions and programmers. The former need to set guidelines and make sure they are followed; the latter two need to follow them. At this stage, it is up to people to create the rules they want AI to follow.

Artificial intelligence is happening, and it will not stop merging with our society. It is probably the strongest transformation of civilisation since the invention of the steam engine. On the one hand, the industrial revolution led to great progress and wealth for most of humankind. On the other hand, it led to great destruction of our environment, climate and planet. These were consequences we did not anticipate or were not willing to accept, consequences which are leading us to the brink of our own extinction if no counter-action is taken. The same will be true for the advent of AI that we are currently witnessing. It can lead to great benefits, wealth and progress for most of the technological world, but we are responsible for ensuring that the consequences do not push us over the brink of extinction. Even though we might not be able to anticipate all the consequences, we as a society have the responsibility to act with caution and thoroughness. To conclude with Spider-Man's uncle's words, "with great power comes great responsibility". And as AI might be the greatest and last power created by humans, it might be too great a responsibility, or it will be smart enough to be responsible for itself.


Layers of Responsibility for a better AI future was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.


The ethics of algorithmic fairness

Once applied to risk assessment in the criminal justice system, are we deceiving ourselves on the wrong track?

This article questions the current direction of the ethical debate surrounding predictive risk assessment in the criminal justice system. In this context, the ethical debate currently revolves around how to engage in practices of predicting criminal behaviour through machine learning in ethical ways [1]; for example, how to reduce bias while maintaining accuracy. This is far from fundamentally questioning the purpose for which we want to operationalise ML algorithms: should we use them to predict criminal behaviour, or rather to diagnose it, intervene on it and, most importantly, better understand it? Each approach comes with a different method for risk assessment: prediction relies on regression, diagnosis on causal inference [2]. I argue that if the purpose of the criminal justice system is to treat crime rather than forecast it, and to monitor the effects of its own interventions on crime (whether they increase or reduce it), then focusing our ethical debates on prediction is to deceive ourselves on the wrong track. Let us have a look at the present situation.

https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/

Prediction

Algorithmic Fairness

In matters of 'ethical' prediction of criminal behaviour, the branch of algorithmic fairness has recently had a lot to say. Exponents of algorithmic fairness have identified numerous ways in which statistically-driven methods like machine learning can reproduce existing patterns of individual prejudice and institutionalised bias [3]. Some focus on reducing bias in the design process, others on reducing it in the outcomes. Overall, they emphasise the importance of predictive parity as an explicit goal; that is, the systems we use should not only be accurate, but should also have similar accuracy rates across all the groups (e.g. different racial groups or genders) to which they are applied [4]. However, Kleinberg et al. [5] proved that no mechanism can achieve both optimal accuracy and optimal predictive parity, and that a trade-off between the two is needed. Others responded by presenting alternative conceptions of fairness. These conceptions, however, often not only differ but are at odds with each other. This is shown by the work of Berk et al. [6], which identifies six kinds of fairness and then shows that these notions conflict not only with accuracy but also with one another.
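To make these notions concrete, the following sketch computes a few group-conditional metrics for a synthetic risk score applied to two groups with different base rates. All numbers are invented; the point is only to show that accuracy, false positive rate and positive predictive value can diverge across groups, and that with unequal base rates they cannot in general all be equalised at once.

```python
# Synthetic sketch of group-conditional error rates for a binary "high risk" flag.
# All numbers are invented; no real risk-assessment tool is being evaluated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                      # two groups, e.g. demographics
base_rate = np.where(group == 0, 0.3, 0.5)         # different re-offence base rates
reoffend = rng.random(n) < base_rate               # "ground truth" outcome
# A noisy score with some individual-level signal, thresholded into a risk flag.
score = np.clip(base_rate + 0.25 * reoffend + rng.normal(0, 0.15, n), 0, 1)
flagged = score > 0.5

def report(mask, name):
    y, p = reoffend[mask], flagged[mask]
    acc = (y == p).mean()                           # accuracy within the group
    fpr = (p & ~y).sum() / max((~y).sum(), 1)       # false positive rate
    ppv = (p & y).sum() / max(p.sum(), 1)           # precision, the parity target
    print(f"{name}: accuracy={acc:.2f}  FPR={fpr:.2f}  PPV={ppv:.2f}")

report(group == 0, "group 0")
report(group == 1, "group 1")
# With unequal base rates, a score cannot in general equalise calibration/PPV and
# error rates across groups at once (Kleinberg et al.), hence the trade-off above.
```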

Is the impossibility of accuracy in the face of fairness a problem? It is if we frame the added value of our algorithms as predictive accuracy. But in the criminal justice system, is that their added value, or is it rather that they can effectively support judges in making better decisions about whom to release and under what conditions (i.e. how the criminal justice system should intervene in an individual's life to mitigate specific, relevant risks)? In the first case, we measure the utility of an algorithm as a predictive tool; in the second, as a diagnostic tool.

Predictive tools are often based on regression analysis. Regression enables researchers to identify variables that are predictive of an outcome of interest without necessarily having to understand why a given factor is significant [7]. For example, given a high risk of criminal re-offence, regression analysis would identify the variables correlated with it, such as anti-social behaviour or criminal history. However, it would not offer any insight into why this correlation arises. Furthermore, it treats this 'risk' as a static, statistical fact about the world. Faced with the predicted, statistically high risk of someone re-committing a crime given their anti-social behaviour, regression can only tell us what to avoid in order to manage this 'given' risk: releasing the person. It forecasts crime cycles in order to arrive there first, to beat time and to play crime's own game, rather than to treat it. Diagnostic tools, by contrast, present risk as a dynamic phenomenon, as something that can be mitigated through interventions. The next section explains this in more detail.
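First, though, a minimal sketch of the prediction side, using invented data and variable names: a logistic regression happily reports which covariates carry predictive weight, but its coefficients say nothing about why they matter or about what would happen if we intervened on them.

```python
# Invented data: a regression-style model reports which covariates predict
# re-offence, but its coefficients carry no causal meaning and no "why".
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
X = pd.DataFrame({
    "prior_offences": rng.poisson(1.5, n),
    "antisocial_score": rng.normal(0, 1, n),
    "unemployed": rng.integers(0, 2, n),
})
# Synthetic outcome correlated with two covariates; the model never sees "why".
logit = -1.0 + 0.6 * X["prior_offences"] + 0.8 * X["antisocial_score"]
reoffend = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, reoffend)
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:>17}: {coef:+.2f}")   # predictive weight, not a causal effect
```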

https://medium.com/@gamesetmax/person-of-interest-chicagos-predictive-policing-how-does-it-work-f3ec382fa3b1

Intervention

Causal Inference

When we talk about using statistical tools for diagnosis, we refer to causal inference. Through causal inference we can hypothesise and test causal relationships between covariates and the outcome variable of interest [8]. Here, the risk of someone re-committing a crime is framed as an outcome against which the effect of different covariates, e.g. anti-social behaviour or unemployment, can be tested for its causal import. 'Risk' is thus presented as a dynamic phenomenon, as something that can be changed by intervening on its causes. Why is this important? I think it is important for two main reasons: first, because it is in the interest of the judicial system to learn how to treat crime; second, because it is also in our interest to monitor the effects of the criminal justice system itself on crime. How can causal inference grant this?

As for 'treating crime', it grants it in two ways. On the one hand, it allows us to isolate and test potential causes and cures. In causal inference, causality is inferred by randomly assigning individuals or groups, referred to as units, to an intervention or treatment [9]. This is common practice in medicine. Each unit subjected to a treatment may realise an outcome of interest, and upon receiving no treatment may realise an alternate outcome, also known as the counterfactual. Randomly assigning units to a 'treatment' and a 'no-treatment' group and comparing the potential outcomes gives a measure of the causal effect of the chosen intervention [10]. The random assignment of units to treatments ensures a "balance" of the covariates (potential confounding factors), thereby isolating the applied treatment as the causal driver (or not). For example, given two randomly assigned groups of convicts, where one receives behavioural therapy and the other does not, it is possible to assess the effect of behavioural therapy as an intervention on criminal or violent behaviour. While such trials are well established in the medical field, the criminal justice system often rejects this possibility over ethical concerns [11].

On the other hand, we are also able to measure the impact of the timing and duration of the applied intervention, something severely lacking in regression-based methods and crucial if we want to intervene effectively on crime [12]. For example, research has shown that the timing of the initiation of behavioural therapy has an effect not only on defendants' conduct in prison, but also on the risk of recidivism [13]. A causal inference framework can suggest when it is best to initiate behavioural therapy as an intervention. The sketch below illustrates the basic randomised-assignment logic.
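A minimal sketch of that logic, with invented numbers: units are randomly assigned to a hypothetical behavioural-therapy treatment, and the difference in mean outcomes between the two groups estimates the average treatment effect.

```python
# Invented numbers illustrating the randomised-assignment logic described above.
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
treated = rng.integers(0, 2, n).astype(bool)   # random assignment balances confounders

# Hypothetical re-offence probabilities with and without behavioural therapy.
p_control, p_treated = 0.40, 0.30
reoffend = np.where(treated,
                    rng.random(n) < p_treated,
                    rng.random(n) < p_control)

ate = reoffend[treated].mean() - reoffend[~treated].mean()
print(f"estimated average treatment effect on re-offence rate: {ate:+.3f}")
# Randomisation licenses a causal reading of this difference; with observational
# data we would additionally need to adjust for confounders.
```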

As for examining the potential effects of the criminal justice system itself on crime, causal inference allows us to separate covariates that are not affected by an intervention from intermediate outcomes that our interventions do affect [14]. This is important for estimating the effects of the criminal justice system's interventions on crime itself. For example, anti-social behaviour is often not found to be the driver of re-offence. Instead, anti-social behaviour is often shown to increase with intensive policing; it is thus an intermediate outcome of the criminal justice system's efforts to reduce crime rather than a separate covariate.

Conclusion

Overall, matters are not so simple. Efforts to conduct this kind of research are often hindered by ethical concerns — concerns that, strangely, do not prevent such research from being conducted in the medical field. Additionally, it is not always possible to test our hypotheses under experimental conditions [15], especially when what is tested are the potential drivers of crime. However, alternative methods and strategies exist that apply similar reasoning to observational data [16]. Notwithstanding the limitations, the potential benefits and the insights into what drives crime as a structural problem, rather than a statistical fact, deserve at least more attention.

The ethics of algorithmic fairness was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.



The ethics of AI and ML ethical codes

This post was inspired by reading 'Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning' by Daniel Greene, Anna Lauren Hoffmann and Luke Stark [1]. The paper analysed public statements on ethical approaches to Artificial Intelligence (AI) and Machine Learning (ML) issued by independent institutions, ranging from 'OpenAI' and 'The Partnership on AI' to 'The Montreal Declaration for a Responsible Development of Artificial Intelligence' and 'The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems'. Overall, the researchers' aim was to uncover assumptions and common themes across the statements, and to spot which of those foster ethical discourse and which hinder it. This article by no means attempts to reproduce the content of the researchers' paper. Rather, it aims to build on some interesting considerations that emerged from the paper and that, in my opinion, deserve further scrutiny.

Across the sample of public statements examined, what emerged was an overall deterministic vision of AI and ML. Deterministic in what way? Generally, the adjective 'deterministic' refers to events that are ultimately determined by causes regarded as external to the will [2]. Applied to the AI/ML discourse, 'deterministic' means that the nature, development and impact of AI/ML have causes external to human will. In philosophy, the term 'determinism' is often in conflict with the idea of 'ethics': whatever is 'determined' lies beyond our ethical questions, because we simply have no control over it. It might therefore seem puzzling that this is one of the themes or assumptions emerging from statements of the AI and ML codes of ethics. I think this is what deserves further scrutiny.

Through their analysis, the researchers identified seven core themes recurring in these AI/ML codes of ethics. I focus specifically on two: 'values-driven determinism' and 'expert oversight'. The former recalls the idea that AI and ML are inevitable, impactful forces that we should nevertheless shape and dress up with our own human values. The statements present deterministic framings of AI/ML: they present them as world-historical forces of change, inevitable seismic shifts to which humans can only react [3]. Paradoxically, AI/ML are at the same time described as 'values-driven', insofar as human beings create them. For example, in the Montreal Declaration there is an overriding hope that "AI will make our societies better" [4]. This hope co-exists with sections exploring individual values such as justice that range between instrumental impact (e.g., "What types of discrimination could AI create or exacerbate?") and active human agency (e.g., "Should the development of AI be neutral or should it seek to reduce social and economic inequalities?") [5]. Similarly, the OpenAI charter aims to tackle the medium-term impact of inevitable "highly autonomous systems that outperform humans at most economically valuable work" by collaborating on "value-aligned, safety-conscious project[s]" [6] in the present and near term. These scenarios are ones in which AI and ML are inevitable forces to which we must adapt, but for which, at the same time, we are also responsible [7].

This seemingly paradoxical aspect can probably be explained by the gap in expertise between AI experts and the wider population. What is 'deterministic' might not be the technology itself, but rather the determining role of the experts versus the 'people' in shaping the future of AI and ML. As the researchers suggest, 'human agency is integral to ethical design, but it is largely a property of experts responsible for the design, implementation and, sometimes, oversight of AI/ML' [8].

This brings us to the other core point in focus: 'expert oversight'. These statements frame ethical codes as a project of expert oversight, where technical and legal experts come together to articulate concerns and implement primarily technical, and secondarily legal, solutions [9]. At the same time, the statements assume a universal community that shares these ethical concerns. Despite this assumption, the vision statements are not documents that call for any mass mobilisation or participation. On one hand, this is a difficult issue to solve: the gap in knowledge between experts and everyone else is a real obstacle, as these ethical frameworks depend heavily on technical aspects and on the design of AI and ML technologies. Moreover, several institutions, such as the Toronto Declaration and Axon's Ethics Board, have tried to be more inclusive towards a not strictly technical public [10]. On the other hand, this state of affairs prevents civil society from exercising its role as a critical, democratic unit. Inevitably, experts drafting 'codes of ethics' will be more likely to detail their own responsibilities as the responsibilities of individual professionals; they will most likely not actively scrutinise the nature of the profession or of the business itself. This resembles codes of business ethics [11] more than the ideals of political philosophy and ethics — justice, fairness — that the codes seem to aspire to. While business ethics is concerned with engaging in business practices in ethical ways, political philosophy fundamentally questions human and social practices themselves. Similarly, the present approach to ethical codes forecloses the possibility of questioning the practice or the 'nature' of AI and ML themselves.

Overall, these considerations suggest that there are some fundamental contradictions in the way we approach AI/ML codes of ethics. On one hand, the supposedly deterministic nature of the technology is in contrast with our capacity to shape it: whatever is 'inevitable' lies beyond our ethical scope. On the other, the determining role of the experts prevents a fundamental questioning of AI and ML systems themselves. Both issues stand in the way of an honest and coherent path to AI and ML ethical codes.

The ethics of AI and ML ethical codes was originally published in AI for People on Medium, where people are continuing the conversation by highlighting and responding to this story.



On the Myth of AI Democratization

Co-written by Vincenzo Lomonaco and Marta Ziosi

“The world’s most valuable resource is no longer oil, but data.” — Copyright © David Parkins, The Economist [1]
The last decade has witnessed tremendous advancements in the context of Artificial Intelligence (AI) to the point that many are framing it not only as a groundbreaking technology but even as “the new electricity” echoing the unique impact its analogue counterpart had and still has on our society. Despite the great hype and inflated hopes for the imminent future, it is undeniable that recent advances in AI under the name of “Deep Learning” or the more recent rebranding “Differentiable Programming” have radically pushed the boundaries of what’s possible, enabling a rich set of applications which were even unthinkable before. AI technologies are now employed in almost any digital product or service we daily use (movie recommendations, on-line shopping, smart home devices, surveillance systems, etc…) but also for ground-braking, innovative frontiers like self-driving cars, personalized health-care and many others. In a context in which many have already expressed concerns about the power and pervasiveness of such technologies [1][2], major IT companies are publicly declaring their will to democratize AI, making it “for every person and every organization” [3][4][5], but also open to developers and researchers around the world through the transparent works of their top-notch AI research labs [6][7][8]. In this (not-so-brief) post we give a better look at this AI democratization process, hoping to spark a new interest in the subject and start talking more about something we think is going to strongly affect our present and future society. Continue reading "On the Myth of AI Democratization"