Mapping cross-cultural visions of Artificial Intelligence

Lethal autonomous weapons, facial recognition software, deepfakes, big data, self-driving cars, social media algorithms – our lives are increasingly governed by the black boxes of Artificial Intelligence. These developments carry serious ethical implications for questions of AI safety, privacy, accountability, and ultimately governance.

There is a wide range of discourse on how to make AI more ethical. However, just as the data sets used to train AI algorithms carry inherent biases, the social construction of the meaning of AI and all its implications is inherently biased as well. To overcome these biases, we need to first acknowledge them; second, uncover alternative visions of AI found in different cultures; and third, establish more inclusive discourses representing this variety of perspectives.

In this spirit, the present research project acknowledges existing biases through its very existence. By building a growing archive of relevant research, the project maps a diverse range of cross-cultural views on Artificial Intelligence. In doing so, it seeks to foster intercultural understanding by inspiring an open and inclusive discourse around alternative views on AI.

By making diversity and inclusivity its focal points, the project ultimately seeks to contribute to more deliberative governance structures for Artificial Intelligence – the basis for truly making AI governance more ethical.


Project Lead: Maurice Jones

Are you interested in this project? Contact us at research@

Get involved!

AI for People is open to collaborations, funding, and volunteers to help our projects reach a more mature stage. Help us make them a reality!
