Policies

The promises of AI/ML technologies raise important ethical and policy concerns among the public. These concerns call for policy-makers to identify a set of values, and for AI/ML practitioners to uphold those values through concrete tools and actions. The logic seems clear; however, these elements are rarely presented in relation to each other. AI for People therefore decided to map ethical concerns to values, and values to tools, in order to create a comprehensive road-map that links citizens, policy-makers, and AI/ML practitioners.

Road-Map

Inconclusive Evidence

Algorithmic conclusions are probabilistic and therefore not infallible, and systems may also incur errors during execution. Acting on such conclusions as if they were certain can lead to unjustified actions; the sketch below illustrates one simple safeguard.

Accuracy and Robustness
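
To make the link between inconclusive evidence and the value of accuracy and robustness more concrete, here is a minimal Python sketch of an abstention policy: the model's output is treated as a probability rather than a verdict, and low-confidence cases are deferred to a human reviewer. The synthetic data, the scikit-learn classifier, and the 0.9 confidence threshold are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: treat a model's output as a probability, not a verdict, and
# defer low-confidence cases to a human. The dataset, the classifier, and the
# 0.9 threshold are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.9  # below this, the system should not act on its own

def decide(model, x):
    """Return an automated decision only when the model is confident enough."""
    proba = model.predict_proba([x])[0]      # class probabilities for one case
    best = int(proba.argmax())
    confidence = float(proba[best])
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": best, "confidence": confidence}
    # Inconclusive evidence: defer rather than risk an unjustified action.
    return {"action": "defer_to_human", "confidence": confidence}

# Toy usage: a synthetic dataset and a simple classifier stand in for a real system.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(decide(model, X[0]))
```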

Inscrutable Evidence

A lack of interpretability and transparency can lead to algorithmic systems that are hard to control, monitor, and correct. This is the commonly cited ‘black-box’ issue; one model-agnostic inspection technique is sketched below.

Explainability and Transparency
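
As one illustration of how the ‘black box’ can be probed, the sketch below uses permutation importance, a generic, model-agnostic inspection technique: each input feature is shuffled in turn to see how much the model's accuracy depends on it. The synthetic data and random-forest model are stand-ins, and this is only one of many explainability methods.

```python
# Illustrative sketch: permutation importance as one model-agnostic way to see
# which inputs a 'black-box' classifier actually relies on. The synthetic data
# and choice of model are assumptions for the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```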

Misguided Evidence

Conclusions can only be as reliable, and as neutral, as the data they are based on; flawed or unrepresentative data can lead to bias. The sketch below shows a simple representation check.

Bias
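
A basic sanity check against misguided evidence is to compare how groups are represented in the training data with how they are represented in the population the system will serve. The sketch below does this with toy counts; the group names, the reference shares, and the 50% under-representation flag are assumptions for the example.

```python
# Illustrative sketch: compare group representation in the training data with an
# assumed reference (e.g. census) distribution, to spot unrepresentative data
# before it hardens into biased conclusions. All numbers are toy assumptions.
from collections import Counter

training_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
reference_shares = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}  # assumed

counts = Counter(training_groups)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts[group] / total
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"{group}: {observed:.1%} of training data vs {expected:.1%} expected{flag}")
```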

Unfair Outcomes

An action could be found to be discriminatory if it has a disproportionate impact on one group of people; the sketch below shows one common way to quantify such an impact.
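
One widely used way to screen for disproportionate impact is to compare the rate of favourable outcomes across groups. The sketch below computes this ‘disparate impact’ ratio on toy decisions; the groups, the outcomes, and the commonly cited four-fifths (0.8) reference point are illustrative assumptions, not a legal standard.

```python
# Illustrative sketch: compare favourable-outcome rates between groups and
# compute a 'disparate impact' ratio. The toy decisions and the commonly cited
# four-fifths (0.8) reference point are assumptions for the example.
from collections import defaultdict

decisions = [  # (group, favourable_outcome) pairs; toy data only
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += int(outcome)

rates = {group: favourable[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

print("favourable-outcome rates:", rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # a screening heuristic, not proof of discrimination
    print("warning: outcomes may disproportionately affect one group")
```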

Transformative Effects

Algorithmic activities, such as profiling, can challenge individual autonomy and informational privacy.

Traceability

It is hard to assign responsibility for algorithmic harms, and this can lead to issues with moral responsibility; the sketch below shows one building block for accountability.

Accountability
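
Traceability starts with keeping an auditable record of automated decisions, so that responsibility can be reconstructed after the fact. The sketch below appends one record per decision to a log file; the field names, the model identifier, and the JSON-lines format are illustrative assumptions rather than an established standard.

```python
# Illustrative sketch: an append-only audit log for automated decisions, so that
# who (or what) decided, on which inputs, can be traced later. Field names, the
# model identifier, and the JSON-lines format are assumptions for the example.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "decisions.log"

def log_decision(model_id: str, inputs: dict, output, operator: str) -> None:
    """Append one traceable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,                    # which model/version decided
        "input_hash": hashlib.sha256(            # fingerprint of the inputs
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "operator": operator,                    # who deployed or oversaw it
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Toy usage with made-up values:
log_decision("credit-model-v2", {"income": 41000, "age": 34}, "approve", "analyst_17")
```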

A great part of this work was made possible by these resources:

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 1-28.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679.