AI Ethical Principles are guidelines put forward by policy-makers that can, in the words of Turilli (2007), act ‘as abstractions, as normative constraints on the do’s and don’ts of algorithmic use in society’. Examples are manifold: fairness, explicability, robustness, privacy, security, justice, and autonomy.
Ideally, these guidelines stand as evidence of the cardinal ethical values we want AI systems to reflect. In practice, the many published guidelines also show evidence of an emerging consensus among policy-makers.
A review of 84 ethical AI documents by Jobin et al. (2019) found that no single ethical principle featured in all of them, although themes of transparency, justice and fairness, non-maleficence, responsibility and privacy appeared in more than half. Themes of privacy, security, autonomy, justice, human dignity, control of technology and the balance of powers were also recurrent (Royakkers et al., 2018).
More than 70 documents on AI ethics guidelines have been published in the last three years by a multiplicity of stakeholders: industry (Google, IBM), governments (the High-Level Expert Group of the European Commission, the US, Germany…), intergovernmental institutions (the OECD), and academia (the Future of Life Institute, IEEE).
The efforts below would not have been possible without the solid foundation laid by the research conducted in the papers ‘From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices’ and ‘The ethics of algorithms: Mapping the debate’.
AI for People is open to collaborations, funding and volunteers to bring these efforts to a more mature stage. Help us make them a reality!
Join our discussions
Attend & meet us
Support AI for People