AI is not a novel form of intelligence; it is a novel form of agency. Decoupling intelligence from agency in the e-Age, where society shapes AI-Assemblages and AI-Assemblages shape the public manifestation of patterns of thought and behaviour, produces significant challenges regarding bias, fairness, privacy, trust, autonomy, responsibility, transparency, and explainability. AI has altered society, society is fashioning AI in turn, and those reciprocal changes will only accelerate. Nothing stands in their way.

AlgorithmWatch is a human rights organization based in Berlin and Zurich that works toward a world where algorithms and AI do not weaken justice, democracy, and sustainability but strengthen them. AlgorithmWatch’s global inventory of principles, voluntary assurances, and charters for the ethical use of algorithms and AI lists 167 guidelines; it was last updated in 2020. Without consensus, this “principle proliferation” has produced a cottage industry of overlapping frameworks that threatens to confuse and overwhelm, and the result is repetition, redundancy, and ambiguity.

On October 4, 2022, the White House announced the Blueprint for an AI Bill of Rights (AIBoR). The AIBoR is nonbinding and creates no legal rights, but it signals the White House’s intent to tackle a battery of issues beyond privacy and anti-discrimination. The AIBoR also delineates two other human rights principles: the right to be protected from unsafe or ineffective AI systems, and the right to receive notice and explanation of algorithmic decisions affecting individuals’ lives.

The “unsafe or ineffective systems” principle protects people from AI systems that pose a risk to their safety. For example, AI devices have been used to help stalkers engage in harassment and abuse. This principle aligns with the right to life and the right to security of the person, guaranteed by Article 3 of the Universal Declaration of Human Rights.

The aim of the “notice and explanation” principle is to protect the rights of people affected by automated decisions. A person must receive a clear and valid explanation of an outcome in order to decide whether to appeal it. This principle is analogous to the right to due process protected by the U.S. Constitution.

The European Union has also moved forward with legislation recognizing AI’s effect on human rights. The EU AI Act was endorsed by EU Member States on 2 February 2024 and by the European Parliament on 13 March 2024. The AI Act aims to “enhance and promote the protection” of nine rights guaranteed by the EU’s Charter of Fundamental Rights. The Digital Services Act (DSA) and the Digital Markets Act (DMA) form a single set of rules with two main goals: (1) to create a safer digital space and (2) to foster innovation, growth, and competitiveness.

It is encouraging that AI regulators are considering rights beyond privacy and anti-discrimination. However, these commitments are unlikely to make a difference without standardized processes for meaningfully assessing how AI affects individuals’ human rights and for mitigating those effects when they cause harm. This concern has led several international policy frameworks to recommend explainability as a requirement for any ML system.

A review of the highest-profile sets of principles by Luciano Floridi and Josh Cowls of the Oxford Internet Institute and The Alan Turing Institute has produced a concise identification of five core principles for ethical AI. These principles have deep-rooted implications for future efforts to create laws, rules, standards, and next practices in many contexts. The review, “A Unified Framework of Five Principles for AI in Society” (2019), identifies four core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice.

However, based on their comparative analysis, they add a fifth principle: explicability, understood as combining both the epistemological sense of intelligibility (as an answer to the question “how does it work?”) and the ethical sense of accountability (as an answer to the question “who is liable for the way it works?”). This additional principle is needed if AI is to be understood as an unprecedented form of agency.

In his essay “Justice as Fairness: Political not Metaphysical” (1985), John Rawls posits a conception of justice comprising two main principles, liberty and equality; the second is subdivided into fair equality of opportunity and the difference principle. The principles are arranged in “lexical priority,” ranked in the order of the liberty principle, fair equality of opportunity, and the difference principle, and that ordering determines which principle prevails when they conflict in practice (see the sketch below). The principles are intended to function not separately but as a single, inclusive conception of justice, “Justice as Fairness.” How they work to produce algorithmic fairness requires interrogation.
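To make the ordering concrete, the sketch below encodes lexical priority as lexicographic comparison. The policy names and scores are hypothetical illustrations, not anything Rawls specifies; the only point is that a lower-priority principle matters only when all higher-priority principles tie.

```python
# A minimal sketch of Rawlsian lexical priority, with hypothetical
# policies scored on each principle. Python tuples compare
# lexicographically, so a lower-priority principle can only break ties.
CANDIDATE_POLICIES = {
    # (liberty, fair equality of opportunity, difference principle)
    "policy_a": (1.0, 0.6, 0.9),
    "policy_b": (1.0, 0.8, 0.2),  # ties on liberty, wins on opportunity
    "policy_c": (0.9, 1.0, 1.0),  # strongest on the later principles,
                                  # but a liberty deficit ranks it last
}

def lexically_best(policies: dict[str, tuple[float, ...]]) -> str:
    """Return the policy ranked highest under strict lexical ordering."""
    return max(policies, key=policies.get)

print(lexically_best(CANDIDATE_POLICIES))  # -> policy_b
```

Because the comparison is element by element, the liberty score dominates: no surplus on the difference principle can compensate for a deficit on liberty, which is exactly what Rawls’s lexical ordering demands.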

Machine learning (ML) algorithms have made enormous advances in recent years. Notable examples include proposing new organic molecules, predicting criminal recidivism, identifying military targets, and supporting predictive policing. These advances, however, have raised several persistent questions regarding the epistemic status of algorithmic outputs. One of the most contested topics in this emerging discourse is the role of explainability. The call for more explainable algorithms has been especially urgent in areas like clinical medicine and military operations, where user trust is essential and errors could be catastrophic.
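As an illustration of what a per-decision explanation can look like, the sketch below fits a logistic regression on synthetic data and decomposes one prediction’s logit into additive per-feature contributions. The feature names and data are hypothetical, and coefficient-times-value attribution is only one simple technique among many; it is not drawn from the systems cited above.

```python
# A minimal sketch of per-decision explainability, assuming a linear
# (logistic regression) model; feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["prior_offenses", "age", "employment_years"]
X = rng.normal(size=(500, 3))
# Synthetic label driven mainly by the first feature.
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> tuple[float, dict[str, float]]:
    """Decompose the model's logit into additive per-feature contributions."""
    contributions = model.coef_[0] * x  # coef_i * x_i for each feature i
    logit = model.intercept_[0] + contributions.sum()
    return logit, dict(zip(feature_names, contributions))

logit, parts = explain(X[0])
print(f"logit = {logit:+.2f}")
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

For a linear model this decomposition is exact; for the deeper models used in clinical or military settings, only post-hoc approximations such as surrogate models or Shapley-value attributions are available, which is one reason explainability remains so contested.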