Mapping initiatives on AI ethics

By Nicole Blommendaal, in collaboration with Lea, Bijal Mehta, Marco-Dennis Moreno and Marta Ziosi.

In this article, you can find a list of interesting initiatives working to make AI a genuine tool for social good.

There is growing momentum across the academic, private, and public sectors to define the principles by which AI should be governed and designed. While AI systems raise serious ethical concerns, the efforts of developers, governments and policy makers alone are insufficient to address those concerns in their full complexity and in their consequences for the wider population.

In this respect, we think that Civil Society initiatives are key to ensuring that the most fundamental layer of society, citizens, can meaningfully shape the systems that affect them.

We, AI for People, have taken inspiration from the ethical concerns presented in the paper “The Ethics of Algorithms: Mapping the Debate” by Mittelstadt et al. (2016) to start a repository of Civil Society initiatives that are actively working on AI ethical principles. The principles are Accuracy & Robustness, Explainability & Transparency, Bias & Fairness, Privacy, and Accountability.

Here we present a starting repository of what we consider meaningful Civil Society initiatives in the field of AI ethics, organised by these principles. Often, one initiative is concerned with more than one principle, so some overlap is to be expected. Most importantly, this short article is by no means an exhaustive representation of the Civil Society ecosystem.

It is rather a starting point for citizens to find out how to become active in the AI Ethics sphere, and an invitation to other Civil Society initiatives to help us expand our repository by adding their own or other initiatives' names here.

If you are interested, you can check out our broader efforts on AI Ethics by visiting our website section on Ethical AI.

Without any further delay, here are the initiatives we’ve found:

On Accuracy & Robustness

Picture from the IDinsight website: https://www.idinsight.org/innovation-team-projects/data-on-demand
  • Data on Demand is an initiative — currently focused on India, with a possible future expansion to sub-Saharan Africa — by IDinsight, a research organisation that describes itself as "helping development leaders maximise their social impact". It aims to develop new approaches to survey data collection that make it radically faster and cheaper: major surveys in India can take a year to implement, and the wait for the resulting data can stretch to four years. Data on Demand aims to significantly shorten this cycle.
  • The team carries out its mission by building robust targeting tools (sampling frames) that leverage electoral databases and satellite imagery, by training a custom machine learning model that automatically detects data quality issues, and by developing a fully automated survey deployment, all of which aims to provide a more efficient alternative to the current surveying system. The organisation is also building machine learning algorithms to predict in real time which surveyors are collecting high-quality data and which need to be retrained or let go (see the sketch after this list). The purpose of all of this is to increase the accuracy and efficiency of the surveying system.
  • Reach them via: Email & Twitter
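
IDinsight has not published its model, so here is only a minimal sketch of what real-time surveyor quality flagging could look like. The per-interview features, the audit-based label and the choice of a random forest are all illustrative assumptions on our part, not the actual Data on Demand pipeline:

```python
# Hypothetical sketch of real-time surveyor quality flagging, loosely inspired
# by the Data on Demand description above. Column names, features, and the
# model choice are illustrative assumptions, not IDinsight's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Assumed per-interview features that could signal quality problems, with a
# label derived from back-check audits (1 = interview flagged as low quality).
interviews = pd.DataFrame({
    "interview_minutes":  [4, 22, 18, 3, 25, 20, 5, 19],
    "pct_dont_know":      [0.40, 0.02, 0.05, 0.55, 0.01, 0.03, 0.48, 0.04],
    "gps_matches_sample": [0, 1, 1, 0, 1, 1, 0, 1],
    "flagged":            [1, 0, 0, 1, 0, 0, 1, 0],
})

X = interviews.drop(columns="flagged")
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, interviews["flagged"])

# Score an incoming interview as it is submitted; a high probability of low
# quality would route the surveyor to retraining rather than trigger any
# automatic decision about their job.
incoming = pd.DataFrame([{"interview_minutes": 6, "pct_dont_know": 0.50,
                          "gps_matches_sample": 0}])
print("P(low quality) =", model.predict_proba(incoming)[0, 1])
```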

On Transparency & Explainability

  • AlgorithmWatch is a non-profit advocacy organization based in Berlin, Germany, whose work involves keeping watch on, and shedding light on, the ethical impact of algorithmic decision-making (ADM) systems around the world. AlgorithmWatch believes that “the more technology develops, the more complex it becomes”, but that “complexity must not mean incomprehensibility”. By explaining the effects of algorithms to the general public, creating a network of experts from different cultures and disciplines, and assisting in the development of regulation and other oversight institutions, AlgorithmWatch is driven to keep AI and algorithms accountable when they are used in society. New and notable projects include their mapping of COVID-19 ADM systems and their 2020 Automating Society Report, which analyzes ADM applications in Europe’s public sphere.
  • Reach them via: Email, Twitter, Instagram and Facebook

On Bias and Fairness

  • EqualAI is not only a nonprofit organization but also a movement working to reduce unconscious bias in the development and use of AI. Their mission is to work together with companies, policy makers and experts to reduce bias in AI. EqualAI pushes for more diversity in tech teams and addresses the biases that already exist in the hiring process. They bring experts, influencers, technology providers and businesses together to write standards for creating unbiased AI, standards aimed at securing brand buy-in and commitments to follow them.
  • Reach them via: Email & Twitter

Other bias initiatives are: Black in AI, Data Science Nigeria, Miiafrica, Indigenous AI and Q: The genderless Voice. Other fairness initiatives are: Black Girls Code, Data Justice Lab, AI and Inclusion, Open Source Diversity and Open Ethics.

On Privacy

  • The World Privacy Forum is a nonprofit, non-partisan public interest research group that operates both nationally (US) and internationally. The organization conducts in-depth research, analysis, and consumer education in the area of data privacy, with an emphasis on pressing and emerging issues. It is one of the only privacy-focused NGOs conducting independent, original, longitudinal research. World Privacy Forum research has provided insight into important issue areas including predictive analytics, medical identity theft, data brokers, and digital retail data flows, among others. Its areas of focus span technology and data analytics broadly, with particular attention to health care data and privacy, large data sets, machine learning, biometrics, workplace privacy issues, and the financial sector.
  • Reach them via: Email, Twitter and Facebook

Other privacy initiatives are: Big Brother Watch, Future of Privacy Forum and Tor Project.

On Accountability

  • The Algorithmic Justice League (AJL) is a cultural movement and organization working towards equitable and accountable AI. Their mission is to raise public awareness about the impact of AI and to give a voice to the communities it affects. One of their core pillars is the call for meaningful transparency: the Algorithmic Justice League aims for a knowledgeable public that understands what AI can and cannot do. Furthermore, because they believe individuals should understand the processes of creating and deploying AI in a meaningful way, they also organize workshops, talks and exhibitions, and head various projects. The Algorithmic Justice League is also extremely active in the field of bias, covered above. AJL’s founder, Joy Buolamwini, in fact features in the documentary “Coded Bias”.
  • If you want to learn more about tools and resources that address a lack of transparency, visit their website.
  • Reach them via: Email

Other accountability-related initiatives are: Access Now, Open Rights Group, Digital Freedom Fund, and AWO Agency.



Layers of Responsibility for a better AI future

Do we really understand responsibilities in Artificial Intelligence, or are we confusing terms and technology in the debate? (Photo credit: https://flic.kr/p/27pq9bw)

This blog post has not been written by an AI. It has been written by a human intelligence pursuing a PhD in Artificial Intelligence. Although the first sentence seems trivial, it might not be so in the near future. If we can no longer distinguish a machine from a human during a phone conversation, as Google Duplex has promised, we should start to be suspicious about textual content on the web. Bots are already estimated to be responsible for 24% of all tweets on Twitter. Who is responsible for all this spam?

But really, this blog post has not been written by an AI — trust me. If it were, it would be much smarter, more eloquent and more intelligent, because eventually AI systems will make better decisions than humans. The whole argument about responsible AI is really an argument about how we define “better” in the previous sentence. But first, let us point out that the ongoing discussion about responsible AI often conflates at least two levels of understanding algorithms:

  • Artificial Intelligence in the sense of machine learning applications
  • General Artificial Intelligence in the sense of an above-human-intelligence system

This blog post does not aim to blur the line between humans and machines, nor does it aim to provide answers to the ethical questions that arise from artificial intelligence. It simply tries to disentangle the conflated layers of AI responsibility and presents a few contemporary approaches at each of those layers.

Artificial Intelligence in the sense of machine learning applications

In recent years, we have clearly reached the first level of AI, which already presents us with ethical dilemmas in a variety of applications: autonomous cars, automated manufacturing and chatbots. Who is responsible for an accident caused by a self-driving car? Can a car decide in the face of a moral dilemma that even humans struggle to agree on? How can technical advances be combined with education programs (human resource development) to help workers practice new, sophisticated skills so as not to lose their jobs? Do we need to declare online identities (is it a person or a bot)? How do we control for the manipulation of emotions through social bots?

These are all questions that we are already facing. The artificial intelligence that gives rise to them is a controllable system: its human creator (the programmer, company or government) can decide how the algorithm should be designed so that the resulting behaviour abides by whatever rules follow from the answers to those questions. The responsibility therefore lies with the human. And just as the makers of hammers, which can be used as tools or abused as weapons, are not held responsible for every abuse, we do not hold creators responsible for every malicious abuse of AI systems. Whether used for good or bad, these AI systems exhibit adaptability, interaction and autonomy, each of which comes with its respective confines.

Chart taken from Virginia Dignum, Associate Professor of Social Artificial Intelligence at TU Delft — Design and evaluation of human agent teamwork.

Autonomy has to act within the bounds of responsibility, which involves a chain of responsible actors. If we give full autonomy to a system, we cannot take responsibility for its actions; but since we do not have fully autonomous systems yet, the responsibility lies with the programmers, followed by some form of supervision that normally follows company standards. Within this well-established chain of responsibility, which is in place in most industrial companies, we need to locate the responsibilities for AI systems with respect to their degree of autonomy. The other two properties, adaptability and interaction, directly contribute to the responsibility we can have over a system. If we allow full interaction with a system, we lose accountability and hence give away responsibility again. Accountability cannot only be about the algorithms: the interaction must also provide an explanation and a justification for the system to be accountable and, consequently, responsible.

Each of these values is more than just a difficult balancing act; they pose intricate challenges in their very definition. Consider the explainability of accountable AI: we already see the surge of an entire field called XAI (Explainable Artificial Intelligence). Nonetheless, we cannot simply start explaining AI algorithms to everyone on the basis of their code; first, we need to come up with a feasible level of explanation. Do we make the code open-source and leave the explanation to the user? Do we provide security labels? Can we define quality standards for AI?
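
To make “a feasible level of explanation” less abstract, here is a minimal sketch of one option from the XAI toolbox: instead of publishing source code, a provider could report which inputs drive a model’s decisions. The dataset and model below are illustrative stand-ins; this post does not prescribe any particular tool.

```python
# A minimal sketch of one "feasible level of explanation": report which
# inputs drive a model's decisions instead of publishing its source code.
# The dataset and model are stand-ins; any classifier could take their place.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance shuffles one feature at a time and measures how much
# held-out accuracy drops: a model-agnostic, user-facing explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]:>25}: {result.importances_mean[idx]:.3f}")
```

Because the technique is model-agnostic, the same explanation format could accompany very different systems, which makes it one plausible candidate for the “security labels” mentioned above.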

Quality standards of this kind have in fact been suggested by the High-Level Expert Group on AI of the European AI Alliance. This group of 52 experts includes engineers, researchers, economists, lawyers and philosophers from academic, non-academic, corporate and non-corporate institutions. Their first draft of the Ethics Guidelines for Trustworthy AI proposes guiding principles as well as investment and policy strategies, and in particular advises on how to use AI to build an impact in Europe by leveraging Europe’s enablers of AI.

On the one hand, the challenges seem broad, and coming up with ethics guidelines that encompass all possible scenarios appears to be a daunting task. On the other hand, none of these questions are new to us. Aviation has a thorough and practicable set of guidelines, laws and regulations that allow us to trust systems which are already mostly autonomous, and yet we do not ask for an explanation of the autopilot software. We cannot simply transfer those rules to all autonomous applications, but we should recognise the importance of such guidelines and not condemn the task.

General Artificial Intelligence in the sense of an above-human-intelligence system

In the previous discussion, we have seen that the problems arising from the first level of AI systems do impact us today, and that we are dealing with them one way or the other. The discussion should be different when we talk about General Artificial Intelligence. Here, we assume that at some point the computing power of a machine supersedes not only the computing power of a human brain (which is already the case), but gives rise to an intelligence that supersedes human intelligence. It has been argued that this will trigger an unprecedented jump in technological growth, resulting in incalculable changes to human civilization — the so-called technological singularity.

In this scenario, we no longer deal with a tractable algorithm, as the super-intelligence would be capable of rewriting any rule or guideline it deems trivial. There would be no way of preventing the system from breaching any security barrier that has been constructed using human intelligence. Many scenarios predict that such an intelligence would eventually get rid of humans or enslave mankind (see Bostrom’s Superintelligence or The Wachowskis’ Matrix trilogy). But there is also a surge of serious research institutions that argue for alternative scenarios and study how we can align such an AI system with our values. This second level of AI has much larger consequences, and its questions can only rest on theoretical assumptions rather than pragmatic guidelines or implementations.

One issue that arises from the conflation of the two layers is that people tend to mistrust a self-driving car because they attribute to it some form of general intelligence that is not (yet) there. Currently, autonomous self-driving cars merely avoid obstacles and are not even aware of the type of object in their way (man, child, dog). Furthermore, all the apocalyptic scenarios contain the same fallacy: they argue using human logic. We simply cannot conceive a logic that would supersede our cognition. Any ethical principle, moral guideline or logical conclusion we want to attribute to the AI has been derived from thousands of years of human reasoning. A super-intelligent system might evolve this reasoning within a split second to a degree that would take us another thousand years to understand. Therefore, any imagination we have about the future past the point of a super-intelligence is as speculative as religious imagination. Interestingly, this conflation of thoughts has led to the founding of “The Church of Artificial Intelligence”.

Responsibility at both levels

My responsibility as an AI researcher is to educate people about the technology that they are using and the technology that they will be facing. In the case of technology that is already in place, we have to disentangle the notion of Artificial Intelligence from that of an uncontrollable super-power that will overtake humanity. As pointed out, the responsibility for responsible AI lies with governments, institutions and programmers. The former need to set guidelines and make sure that they are being followed; the latter two need to follow them. At this stage, it is up to the people to create the rules that they want AI to follow.

Artificial intelligence is happening, and it will not stop merging with our society. It is probably the strongest transformation of civilization since the invention of the steam engine. On the one hand, the industrial revolution led to great progress and wealth for most of humankind. On the other hand, it led to great destruction of our environment, climate and planet. These were consequences we did not anticipate or were not willing to accept, consequences which are leading us to the brink of our own extinction if no counter-action is taken. The same will be true for the advent of AI that we are currently witnessing. It can lead to great benefits, wealth and progress for most of the technological world, but we are responsible for ensuring that the consequences do not push us over the brink of extinction. Even though we might not be able to anticipate all the consequences, we as a society have the responsibility to act with caution and thoroughness. To conclude with Spiderman’s uncle’s words: “with great power comes great responsibility”. And as AI might be the greatest and last power to be created by humans, it might be too great a responsibility, or it will be smart enough to be responsible for itself.



The ethics of algorithmic fairness

Once algorithms are applied to risk assessment in the criminal justice system, are we deceiving ourselves on the wrong track?

This article questions the current direction of the ethical debate surrounding predictive risk assessment in the criminal justice system. In this context, the ethical debate currently revolves around how to engage in practices of predicting criminal behaviour through machine learning in ethical ways [1]; for example, how to reduce bias while maintaining accuracy. This is far from fundamentally questioning the purpose for which we want to operationalise ML algorithms: should we use them to predict criminal behaviour, or rather to diagnose it, intervene on it and, most importantly, better understand it? Each approach comes with a different method for risk assessment: prediction works with regression, while diagnosis works with causal inference [2]. I argue that, if the purpose of the criminal justice system is to treat crime rather than forecast it, and to monitor the effects of its own interventions on crime — whether they increase or reduce it — then focusing our ethical debates on prediction is to deceive ourselves on the wrong track. Let us have a look at the present situation.
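
A toy simulation can illustrate why the two methods answer different questions. In the sketch below, the data and variable names are invented for illustration (the article itself ships no code): the exposure is constructed to have no causal effect on the outcome, yet a purely predictive model still assigns it a large weight, while a simple causal adjustment recovers the null effect.

```python
# Toy contrast between the two framings named above: prediction via
# regression versus effect estimation via the simplest form of causal
# adjustment. Data are simulated; variable names are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
poverty = rng.normal(size=n)                               # confounder
policing = (poverty + rng.normal(size=n) > 0).astype(int)  # exposure
# By construction, policing has NO causal effect on rearrest here.
rearrest = (0.8 * poverty + rng.normal(size=n) > 0).astype(int)

# Predictive framing: policing "predicts" rearrest because both track
# poverty, so a pure forecaster happily assigns it a large positive weight.
naive = LogisticRegression().fit(policing.reshape(-1, 1), rearrest)
print("naive weight on policing:   ", naive.coef_[0, 0])   # clearly > 0

# Diagnostic framing: adjusting for the confounder recovers the true (null)
# effect of the intervention, which is what should inform policy on crime.
adjusted = LogisticRegression().fit(
    np.column_stack([policing, poverty]), rearrest)
print("adjusted weight on policing:", adjusted.coef_[0, 0])  # close to 0
```
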
Image: https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/
Continue reading "The ethics of algorithmic fairness"

The ethics of AI and ML ethical codes

This post was inspired by the reading of ‘Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning’ by Daniel Greene, Anna Lauren Hoffmann and Luke Stark [1]. The paper analysed public statements on ethical approaches to Artificial Intelligence (AI) and Machine Learning (ML) issued by independent institutions, ranging from ‘OpenAI’ and ‘The Partnership on AI’ to ‘The Montreal Declaration for a Responsible Development of Artificial Intelligence’ and ‘The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems’. Overall, the researchers’ aim was to uncover assumptions and common themes across the statements, and to spot which among them foster ethical discourse and which hinder it. This article by no means attempts to reproduce the content of the researchers’ paper. Rather, it aims to build on some interesting considerations that emerged from the paper and that, in my opinion, deserve further scrutiny.
Continue reading "The ethics of AI and ML ethical codes"