Uchenna Nnawuchi, Dr Carlisle George, and Dr Florian Kammueller
As artificial intelligence and machine learning algorithms increasingly drive decision-making, the need to understand and justify the outcomes these systems produce has prompted sustained debate about a right to explanation. This presentation examines the multiple dimensions of such a right, tracing its origins and significance against the growing influence of algorithmic decision-making. An evaluation of existing legal frameworks on artificial intelligence, including international human rights law, case law, and soft law, is used to scrutinise the historical foundations of the right to explanation and to establish its legal grounds and rationale.
A central component of this analysis is the justification of the right: its necessity, importance, implications, and limitations. A proposal for a comprehensive right to explanation is then articulated, emphasising its potential to address the ethical and legal questions raised by algorithmic decision-making. The presentation also offers a comparative analysis, setting the right to explanation against the duty to give reasons and drawing insights from the French legal code.
Finally, the presentation argues for adopting a robust right to explanation, asserting its relevance in safeguarding a range of human rights interests and constitutional principles. By examining the existing legal landscape and proposing a forward-looking framework, the work contributes to the ongoing discourse on reconciling algorithmic decision-making with legal and ethical imperatives.