AI ethical principles are guidelines put forward by policy-makers that can, in the words of Turilli (2007), act 'as abstractions, as normative constraints on the do's and don'ts of algorithmic use in society'. Examples are manifold: fairness, explicability, robustness, privacy, security, justice, and autonomy.
Ideally, these guidelines stand as evidence of the cardinal ethical values we want AI systems to reflect. In practice, the body of published guidelines also shows evidence of an emerging consensus among policy-makers.
A review of 84 ethical AI documents by Jobin et al. (2019) found that no single ethical principle featured in all of them. Nevertheless, the themes of transparency, justice and fairness, non-maleficence, responsibility, and privacy appeared in over half. Similarly, themes of privacy, security, autonomy, justice, human dignity, control of technology, and the balance of powers were found to be recurrent (Royakkers et al., 2018).
More than 70 documents on AI ethical guidelines have been published in the last three years by a multiplicity of stakeholders: industry (e.g. Google, IBM), government (the Montreal Declaration, the EC's HLEG, the US Government), intergovernmental institutions (OECD), and academia and civil society (the Future of Life Institute, IEEE).