Accuracy & Robustness

To counter: Inconclusive Evidence

Algorithmic conclusions are probabilistic and therefore fallible, and algorithms may also fail during execution. Either can lead to unjustified actions.

Let us take the example of an algorithm used to predict the risk of heart failure. No algorithm is 100% accurate: if this one has an accuracy of 85%, on average 15 out of every 100 patients will be misdiagnosed.
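The arithmetic behind this example can be sketched as follows (the numbers are the ones from the text; the split into false positives and false negatives is an assumed illustration, since overall accuracy alone does not tell us which kind of error occurs):

```python
accuracy = 0.85
patients = 100

# Expected number of misdiagnosed patients out of 100.
expected_errors = round(patients * (1 - accuracy))
print(expected_errors)  # 15

# Accuracy alone hides the split between false positives (healthy
# patients flagged as at-risk) and false negatives (at-risk patients
# missed), which carry very different costs in a clinical setting.
```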

Tools for Accuracy & Robustness

Adversarial Robustness Toolbox

The Adversarial Robustness Toolbox (ART) is a Python library designed to support researchers and developers in creating novel defence techniques, as well as in deploying practical defences for real-world AI systems. It currently focuses on improving the adversarial robustness of visual recognition systems, but there are plans to develop it further.
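To make "adversarial robustness" concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), a classic evasion attack of the kind ART provides defences against. It does not use ART's own API; it is a minimal hand-rolled example on a toy logistic classifier, with made-up weights and inputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift input x by eps along the sign of the gradient of the
    binary cross-entropy loss with respect to x, increasing the loss."""
    grad_x = (sigmoid(x @ w + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy logistic classifier (assumed weights, for illustration only).
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.6, 0.2])  # a clean input, correctly classified
y = 1.0                   # its true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)

print(sigmoid(x @ w + b) > 0.5)      # True: clean input is class 1
print(sigmoid(x_adv @ w + b) > 0.5)  # False: small perturbation flips it
```

A robust model should keep its prediction stable under such small, targeted perturbations; ART bundles attacks like this together with defences and evaluation tools.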

Get involved!

AI for People is open to collaborations, funding, and volunteers to help these tools reach a more mature stage. Help us make them a reality!
