Layers of Responsibility for a better AI future

Do we really understand responsibilities in Artificial Intelligence or are we confusing terms and technology in the debate? (Photo credit: https://flic.kr/p/27pq9bw)

This blog post has not been written by an AI. This blog post has been written by a human intelligence pursuing a PhD in Artificial Intelligence. Although the first sentence seems trivial, it might not be so in the near future. If we can no longer distinguish a machine from a human during a phone conversation, as Google Duplex has promised, we should start to be suspicious about textual content on the web. Bots are already estimated to account for 24% of all tweets on Twitter. Who is responsible for all this spam?

But really, this blog post has not been written by an AI — trust me. If it were, it would be much smarter, more eloquent and intelligent, because eventually AI systems will make better decisions than humans. The whole argument about responsible AI is really an argument about how we define "better" in the previous sentence. But first let's point out that the ongoing discussion about responsible AI often conflates at least two levels of understanding algorithms:

  • Artificial Intelligence in the sense of machine learning applications
  • General Artificial Intelligence in the sense of an above-human-intelligence system

This blog post does not aim to blur the line between humans and machines, nor does it aim to provide answers to the ethical questions that arise from artificial intelligence. In fact, it simply tries to contrast the two conflated layers of AI responsibility and to present a few contemporary approaches at each of those layers.

Artificial Intelligence in the sense of machine learning applications

In recent years, we have definitely reached the first level of AI, one that already presents us with ethical dilemmas in a variety of applications: autonomous cars, automated manufacturing and chatbots. Who is responsible for an accident caused by a self-driving car? Can a car decide in the face of a moral dilemma that even humans struggle to agree on? How can technical advances be combined with education programmes (human resource development) to help workers practice new, sophisticated skills so that they do not lose their jobs? Do we need to declare online identities (is it a person or a bot)? How do we control for the manipulation of emotions through social bots?

These are all questions that we are already facing. The artificial intelligence that gives rise to them is a controllable system: its human creator (the programmer, company or government) can decide how the algorithm should be designed so that the resulting behaviour abides by whatever rules follow from the answers to those questions. The responsibility therefore lies with the human. In the same way that we sell hammers, which can be used as tools or abused as weapons, we are not responsible for every malicious abuse of AI systems. Whether used for good or bad, these AI systems exhibit adaptability, interaction and autonomy, three properties that can each be layered with its respective confines.

Chart taken from Virginia Dignum, Associate Professor of Social Artificial Intelligence at TU Delft — Design and evaluation of human agent teamwork.

Autonomy has to act within the bounds of responsibility, which includes a chain of responsible actors: if we gave full autonomy to a system, we could not take responsibility for its actions; but as we do not have fully autonomous systems yet, the responsibility lies with the programmers, followed by some form of supervision that normally follows company standards. Within this well-established chain of responsibility, already in place in most industrial companies, we need to locate the responsibilities for AI systems with respect to their degree of autonomy. The other two properties, adaptability and interaction, directly contribute to the responsibility we can have over a system. If we allow full interaction with the system, we lose accountability and hence give away responsibility again. Accountability cannot only be about the algorithms: the interaction itself must provide an explanation and a justification for the system to be accountable and, consequently, responsible. Each of these values is more than just a difficult balancing act; they pose intricate challenges in their very definition. Consider the explainability of accountable AI: we already see the rise of an entire field called XAI (Explainable Artificial Intelligence). Nonetheless, we cannot simply start explaining AI algorithms to everyone on the basis of their code; first we need to come up with a feasible level of explanation. Do we make the code open-source and leave the explanation to the user? Do we provide security labels? Can we define quality standards for AI?
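To make the question of a "feasible level of explanation" more concrete, here is a minimal sketch, purely my own illustration rather than anything proposed in the guidelines discussed here, of one common XAI technique, permutation feature importance: it reports which inputs a trained model relies on without exposing the model's source code. The dataset and model are placeholders.

```python
# Minimal XAI sketch (illustrative only): report which inputs a model relies on,
# as one possible "level of explanation" short of publishing the code itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any tabular classifier could stand in here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# the features whose shuffling hurts most are the ones the model depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Whether a ranked list of influential inputs like this counts as an adequate explanation for a user, an auditor or a court is precisely the kind of question such guidelines have to settle.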

Quality standards of this kind have in fact been suggested by the High-Level Expert Group on AI of the European AI Alliance. This group of 52 experts includes engineers, researchers, economists, lawyers and philosophers from academic, non-academic, corporate and non-corporate institutions. The first draft of the Ethics Guidelines for Trustworthy AI proposes guiding principles as well as investment and policy strategies, and in particular advises on how AI can make an impact in Europe by leveraging Europe's enablers of AI.

On the one hand, the challenges are broad, and coming up with ethics guidelines that encompass all possible scenarios appears to be a daunting task. On the other hand, none of these questions is new to us. Aviation has a thorough and practicable set of guidelines, laws and regulations that allows us to trust systems which are already largely autonomous, and yet we do not ask for an explanation of the autopilot software. We cannot simply transfer those rules to all autonomous applications, but we should recognise the importance of such guidelines rather than dismiss the task as impossible.

General Artificial Intelligence in the sense of an above-human-intelligence system

In the previous discussion, we have seen that the problems arising from the first level of AI systems already affect us today, and that we are dealing with them one way or another. The discussion should be different when we talk about General Artificial Intelligence. Here, we assume that at some point the computing power of a machine surpasses not only the computing power of a human brain (which is already the case), but gives rise to an intelligence that surpasses human intelligence. It has been argued that this will trigger an unprecedented jump in technological growth, resulting in incalculable changes to human civilization — the so-called technological singularity.

In this scenario, we no longer deal with a tractable algorithm, as the super-intelligence will be capable of rewriting any rule or guideline it deems trivial. There would be no way of preventing the system from breaching any security barrier constructed using human intelligence. Many scenarios predict that such an intelligence would eventually get rid of humans or enslave mankind (see Bostrom's Superintelligence or The Wachowskis' Matrix trilogy). But there is also a growing number of serious research institutions that argue for alternative scenarios and study how we can align such an AI system with our values. This second level of AI thus has much larger consequences, and its questions can only rest on theoretical assumptions rather than on pragmatic guidelines or implementations.

One issue that arises from conflating the two layers is that people tend to mistrust a self-driving car because they attribute to it some form of general intelligence that is not (yet) there. Currently, autonomous self-driving cars only avoid obstacles and are not even aware of the type of object (man, child, dog). Furthermore, all the apocalyptic scenarios contain the same sort of fallacy: they argue using human logic. We simply cannot conceive a logic that would supersede our cognition. Any ethical principle, moral guideline or logical conclusion we want to attribute to the AI has been derived from thousands of years of human reasoning. A super-intelligent system might therefore evolve this reasoning within a split second to a degree that would take us another thousand years to understand. Any imagination we have about the future beyond the point of super-intelligence is therefore as speculative as religious imaginations. Interestingly, this conflation of thoughts has led to the founding of "The Church of Artificial Intelligence".

Responsibility at both levels

My responsibility as an AI researcher is to educate people about the technology they are using and the technology they will be facing. For technology that is already in place, we have to dispel the notion of Artificial Intelligence as an uncontrollable super-power that will overtake humanity. As pointed out, the responsibility for responsible AI lies with governments, institutions and programmers. The former need to set guidelines and make sure they are being followed; the latter two need to follow them. At this stage, it is up to people to create the rules that they want AI to follow.

Artificial intelligence is happening and it will not stop merging with our society. It is probably the most profound change to civilization since the invention of the steam engine. On the one hand, the industrial revolution led to great progress and wealth for most of humankind. On the other hand, it led to great destruction of our environment, climate and planet. These were consequences we did not anticipate or were not willing to accept, consequences that are leading us to the brink of our own extinction if no counter-action is taken. The same will be true for the advent of AI that we are currently witnessing. It can lead to great benefits, wealth and progress for most of the technological world, but we are responsible for ensuring that the consequences do not push us over the brink of extinction. Even though we might not be able to anticipate all the consequences, we as a society have the responsibility to act with caution and thoroughness. To conclude with Spider-Man's uncle's words: "with great power comes great responsibility". And as AI might be the greatest and last power to be created by humans, it might be too great a responsibility, or it will be smart enough to be responsible for itself.



The ethics of algorithmic fairness

When algorithmic fairness is applied to risk assessment in the criminal justice system, are we deceiving ourselves and heading down the wrong track?

This article questions the current undertakings of the ethical debate surrounding predictive risk assessment in the criminal justice system. In this context, the ethical debate currently revolves around how to engage in practices of predicting criminal behaviour through machine learning in ethical ways [1]; for example, how to reduce bias while maintaining accuracy. This is far from fundamentally questioning the purpose for which we want to operationalise ML algorithms: should we use them to predict criminal behaviour, or rather to diagnose it, intervene on it and, most importantly, better understand it? Each approach comes with a different method for risk assessment: prediction relies on regression, while diagnosis relies on causal inference [2]. I argue that, if the purpose of the criminal justice system is to treat crime rather than forecast it, and to monitor the effects of its own interventions on crime (whether they increase or reduce it), then focusing our ethical debates on prediction is to deceive ourselves onto the wrong track. Let us have a look at the present situation.
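As a rough, hypothetical sketch of the distinction drawn above (simulated data and made-up variable names, not taken from the article or from any real risk-assessment tool), a predictive approach fits a regression to rank individuals by risk, while the diagnostic question asks what effect an intervention has on re-offending, which ultimately requires a causal design:

```python
# Illustrative sketch only: simulated data, hypothetical variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
prior_offences = rng.poisson(1.5, n)   # illustrative covariate
in_programme = rng.integers(0, 2, n)   # hypothetical rehabilitation programme
# Simulated outcome: risk rises with prior offences, falls with the programme.
p = 1 / (1 + np.exp(-(0.8 * prior_offences - 0.7 * in_programme - 1.0)))
reoffend = rng.binomial(1, p)

# (1) Prediction: a regression that scores individuals by their risk.
X = np.column_stack([prior_offences, in_programme])
risk_model = LogisticRegression().fit(X, reoffend)
print("predicted risk of first 5 individuals:",
      risk_model.predict_proba(X)[:5, 1].round(2))

# (2) Diagnosis/intervention: how much does the programme change re-offending?
# This naive difference in rates only stands in for a proper causal estimate,
# which would need randomisation or adjustment for confounders.
effect = reoffend[in_programme == 1].mean() - reoffend[in_programme == 0].mean()
print("naive estimated effect of the programme:", round(effect, 3))
```

The point of the sketch is only that the two questions differ: a well-calibrated risk score says nothing, by itself, about whether an intervention increases or reduces crime.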
https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/
Continue reading "The ethics of algorithmic fairness"

The ethics of AI and ML ethical codes

This post was inspired by the reading of 'Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning' by Daniel Greene, Anna Lauren Hoffmann and Luke Stark [1]. The paper analysed public statements issued by independent institutions — ranging from 'Open AI' and 'The Partnership on AI' to 'The Montreal Declaration for a Responsible Development of Artificial Intelligence' and 'The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems', among others — on ethical approaches to Artificial Intelligence (AI) and Machine Learning (ML). Overall, the researchers' aim was to uncover assumptions and common themes across the statements, to spot which of those foster ethical discourse and which hinder it. This article by no means attempts to reproduce the content of the researchers' paper. Rather, it aims to build on some interesting considerations that emerged from the paper and that, in my opinion, deserve further scrutiny.
Continue reading "The ethics of AI and ML ethical codes"

On the Myth of AI Democratization

Co-written by Vincenzo Lomonaco and Marta Ziosi

“The world’s most valuable resource is no longer oil, but data.” — Copyright © David Parkins, The Economist [1]
The last decade has witnessed tremendous advancements in the context of Artificial Intelligence (AI), to the point that many are framing it not only as a groundbreaking technology but even as "the new electricity", echoing the unique impact its analogue counterpart had, and still has, on our society. Despite the great hype and inflated hopes for the imminent future, it is undeniable that recent advances in AI under the name of "Deep Learning", or the more recent rebranding "Differentiable Programming", have radically pushed the boundaries of what is possible, enabling a rich set of applications that were unthinkable before. AI technologies are now employed in almost every digital product or service we use daily (movie recommendations, online shopping, smart home devices, surveillance systems, etc.), but also in ground-breaking, innovative frontiers such as self-driving cars, personalized health care and many others. In a context in which many have already expressed concerns about the power and pervasiveness of such technologies [1][2], major IT companies are publicly declaring their will to democratize AI, making it "for every person and every organization" [3][4][5], but also open to developers and researchers around the world through the transparent work of their top-notch AI research labs [6][7][8]. In this (not-so-brief) post we take a closer look at this AI democratization process, hoping to spark new interest in the subject and to start talking more about something we think is going to strongly affect our present and future society.
Continue reading "On the Myth of AI Democratization"