When an algorithm causes harm, it is often hard to determine who is responsible for that harm, and this creates a gap in moral responsibility.
A clear example is self-driving cars, where the vehicle is designed to replace the human operator and drive autonomously. Self-driving cars raise pressing questions of responsibility when an accident occurs and no human driver was behind the wheel.
Tools for Accountability
The paper ‘Accountable Algorithms’ by Kroll et al. (2017) describes a set of technical tools that developers can use to help ensure their system meets the requirement of ‘procedural regularity’, i.e. that it consistently makes decisions according to the same rule. The framework builds on software verification, cryptographic commitments, zero-knowledge proofs and fair random choices. Although we promised to focus on hands-on tools, we make this academic paper our one exception because it presents such tools in detail.
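To give a flavour of one of these building blocks, here is a minimal sketch of a hash-based cryptographic commitment in Python. This is an illustrative toy, not the construction from the paper: a decision-maker publishes a commitment to its decision rule before applying it, and can later reveal the rule and nonce to prove the rule was fixed in advance.

```python
import hashlib
import secrets


def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to a value. Returns (commitment, nonce).

    The commitment can be published without revealing the value;
    the random nonce hides the value from brute-force guessing.
    """
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + value).digest()
    return commitment, nonce


def verify(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Check that a revealed (value, nonce) pair matches the commitment."""
    return hashlib.sha256(nonce + value).digest() == commitment


# The decision-maker commits to its rule before making any decisions...
c, n = commit(b"approve if score > 0.7")

# ...and later opens the commitment to show the rule never changed.
assert verify(c, b"approve if score > 0.7", n)
assert not verify(c, b"approve if score > 0.9", n)
```

A commitment alone reveals nothing about the rule; combined with zero-knowledge proofs, as the paper discusses, one can additionally prove properties of the committed rule without disclosing it.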
AI for People is open to collaborations, funding and volunteers to help these projects reach a more mature stage. Help us make them a reality!
Join our discussions
Attend & meet us
Support AI for People