3 reasons why we probably need an “algorithms police”

From self-driving cars to smart assistants, AI is increasingly becoming part of our daily lives, and it often mediates actual human-to-human and human-machine-human interaction. Good, bad or indifferent, AI is here to stay for the long run. But as AI becomes exponentially more powerful, it also demands thoughtful examination by both the purveyors of AI and the powers that be. Organizations ought to use AI in a responsible, ethical and transparent way, but how do we control, manage and enforce that?
...
If a self-driving car goes left instead of right and kills someone in the process, we will want to know why. More specifically: why did the algorithm decide to go left rather than right and cause the accident? Since human lives are at stake, or can at least be affected, we are going to need almost instantaneous reproducibility and interpretability. That means answering three questions: what just happened, why did it happen, and who is liable or responsible when things go wrong...
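To make "reproducibility" concrete: an investigator should be able to replay a decision from logged inputs and get the same answer, plus a human-readable reason. The sketch below is a minimal, hypothetical illustration in Python (the sensor names, the toy steering rule, and the log format are all invented for this example, not any vendor's actual system):

```python
import json
import random

def decide(sensor_inputs, seed):
    """Toy steering policy: deterministic given the same inputs and seed."""
    rng = random.Random(seed)
    left = sensor_inputs["obstacle_left"]    # hypothetical distance sensors
    right = sensor_inputs["obstacle_right"]
    # Steer away from the closer obstacle; break exact ties with the
    # seeded RNG so even the "random" branch replays identically.
    if left < right:
        action = "right"
    elif right < left:
        action = "left"
    else:
        action = rng.choice(["left", "right"])
    # Log everything needed to reproduce and interpret the decision.
    record = json.dumps({
        "inputs": sensor_inputs,
        "seed": seed,
        "action": action,
        "reason": f"obstacle_left={left}, obstacle_right={right}",
    })
    return action, record

# Original decision, with its audit record.
action, log = decide({"obstacle_left": 2.0, "obstacle_right": 5.0}, seed=42)

# An investigator replays the decision purely from the log.
logged = json.loads(log)
replay_action, _ = decide(logged["inputs"], logged["seed"])
assert replay_action == action
```

Real systems are vastly more complex, but the principle holds: if inputs, model version and randomness are logged, the "why did it go left?" question becomes answerable after the fact.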