For anyone who still doubts the need for explanations in the growing number of AI decision-support systems, this article from the United Nations forum should dispel those doubts.
The lack of explainability should even be seen as a blocker to deploying AI systems, especially when such systems can influence or impact people's lives, health, or safety. The article also flags the ethical risk that arises when an AI system's explanations do not help in assessing how its machine-learning-based results affect people's human rights, for example by introducing bias into access rights or restricting freedoms.
The High Commissioner, Ms Bachelet, warned: “Urgent action is needed as it can take time to assess and address the serious risks this technology poses to human rights: The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be”.
About the Author: iSee Team