Any promise of a future grounded in human rights must reckon with the difficulties that AI's transformation of agency creates for International Human Rights Law. The content of International Human Rights Law will be reshaped by AI's decoupling of agency from intelligence, a decoupling that delegates decision-making power to technological artefacts. The central view here is that AI is not a new form of intelligence but an unprecedented form of agency.

AI divorces the ability to deal with a task successfully, through problem-finding, problem-framing, and problem-solving, from any need for intelligence. When we embrace AI and its smart agency, we freely cede part of our decision-making power.

Affirming the principle of autonomy in the context of AI requires a balance between the authority we delegate to AI and the power humans retain. The risk for International Human Rights Law is that the growth in artificial authority may undermine the flourishing of human autonomy.

The five core principles of ethical AI are beneficence, non-maleficence, justice, explicability, and autonomy. Beneficence and non-maleficence are not logically equivalent; each is a separate principle. The Asilomar Principles warn against the threats of an AI arms race and of the recursive self-improvement of AI.

At the same time, the Partnership Principles assert the importance of AI working within secure constraints. The IEEE principles cite the need to avoid misuse, and the Montreal Declaration argues that those developing AI should assume responsibility by working against the risks arising from their technological innovations. Whether it is Dr. Frankenstein or his creation against whose maleficence we should guard remains uncertain.

Explicability is understood as combining the epistemological sense of intelligibility (an answer to the question "How does it work?") and the ethical sense of accountability (an answer to the question "Who is liable for the way it works?"). Autonomy is mentioned in four of the documents outlining principles for ethical AI. The Montreal Declaration calls for a balance between human-led and machine-led decision-making processes and holds that the development of AI should promote the autonomy of all human beings.

The EGE states that autonomous systems must not impair the freedom of human beings to set their own norms and standards. The AIUK takes the thinner position that the autonomous power to hurt, destroy, or deceive humans should never be vested in AI. The Asilomar document supports autonomy on the ground that humans should choose how, and whether, to delegate decisions to AI systems in pursuit of human-selected aims.

The taxonomies of ethical AI make clear that machine autonomy should be restricted and intrinsically reversible, so that human independence can be protected or re-established whenever necessary. Humans must decide which decisions to make themselves and in what order of priority. This invokes a notion of meta-autonomy, or a decide-to-delegate model, in which deciding to decide again is the ultimate safeguard.
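A minimal sketch of how the decide-to-delegate model might be encoded in software follows. The DelegationRegister class and its method names are hypothetical illustrations, not drawn from any of the cited documents; the point is that every delegation is scoped and revocable, so the human can always decide to decide again.

```python
# Hypothetical sketch of the decide-to-delegate model: every delegation
# to an AI system is recorded and reversible, preserving meta-autonomy.
class DelegationRegister:
    def __init__(self):
        self._delegated = {}  # task name -> AI decision function

    def delegate(self, task, ai_decider):
        """Human chooses to hand a task to an AI system."""
        self._delegated[task] = ai_decider

    def revoke(self, task):
        """The ultimate safeguard: reclaim the decision at any time."""
        self._delegated.pop(task, None)

    def decide(self, task, context, human_decider):
        """Route to the AI only while delegation stands; else to the human."""
        decider = self._delegated.get(task, human_decider)
        return decider(context)

# Usage: delegate loan triage to an AI, then revoke and decide again.
register = DelegationRegister()
register.delegate("loan_triage",
                  lambda ctx: "approve" if ctx["score"] > 650 else "refer")
print(register.decide("loan_triage", {"score": 700},
                      human_decider=lambda ctx: "review"))  # -> approve (AI)
register.revoke("loan_triage")
print(register.decide("loan_triage", {"score": 700},
                      human_decider=lambda ctx: "review"))  # -> review (human)
```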

Decoupling intelligence from agency in the e-Age produces significant challenges regarding bias, fairness, intelligibility ("How does it work?"), privacy, beneficence (do only good), non-maleficence (do no harm), justice, trust, autonomy, accountability ("Who is responsible for the way it works?"), transparency, and explicability.

The "right to be forgotten" is a novel concept in law, but the European Court of Justice solidified its status as a human right when it ruled against Google in the Costeja case on May 13, 2014. A balancing of liberties is nevertheless necessary: the "right to know" must be weighed against the freedom to remove, or to make more difficult to access, truthfully published information about individuals and corporations.

The Right to be Forgotten, as outlined in Article 17 of the EU General Data Protection Regulation (GDPR), poses significant challenges for AI-Assemblages. This right empowers the digital citizen to request the deletion of their personal information from databases and to stop the dissemination of private information. The question therefore arises of what happens to AI models that were trained, unbeknownst to the citizen, on data that is now the subject of an erasure request.

AI systems, in particular those using machine learning, depend on large datasets to train their algorithms. Deleting data upon request can therefore impair the performance and decision-making capabilities of these models. If individuals worldwide exercise their right to erasure at scale, the rules that AI software has learned will shift, because models must be retrained, or otherwise made to forget, without the deleted records.
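A minimal sketch of what honouring an erasure request can mean for a trained model follows. The function names, subject identifiers, and the toy nearest-centroid classifier are illustrative assumptions standing in for a production machine-learning pipeline; the model is simply rebuilt from scratch without the erased subject's records, the most direct (and costliest) way to make a model forget.

```python
# Illustrative sketch: honouring an Article 17 erasure request by
# retraining without the deleted records (exact "unlearning").
import numpy as np

def erase_and_retrain(features, labels, subject_ids, erased_ids):
    """Drop all rows belonging to erased data subjects, then retrain."""
    keep = ~np.isin(subject_ids, list(erased_ids))
    X, y = features[keep], labels[keep]
    # Retrain from scratch on the remaining data: a per-class mean
    # (nearest-centroid) model stands in for a full ML pipeline.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Usage: subject 42 invokes the right to erasure, so every record
# linked to 42 is removed before the model is rebuilt.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y = np.array([0, 1, 0, 1])
ids = np.array([42, 7, 99, 13])
model = erase_and_retrain(X, y, ids, erased_ids={42})
print(predict(model, np.array([0.85, 0.9])))  # -> 1
```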

AI-Assemblages can also create inferred data by combining existing datasets, and this creates a deeper enigma: how do we manage inferred personal data, stored in data cooperatives and data lakes, under the right to be forgotten? The outputs of AI-Assemblages are, moreover, sometimes inexplicable, which exacerbates the compliance problems associated with the right to be forgotten.
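To see how inferred data arises, consider the toy sketch below; the records, identifiers, and inference rule are invented for illustration. Neither dataset contains the sensitive fact on its own, yet joining them produces personal data that no one ever disclosed and that no single erasure request obviously covers.

```python
# Invented illustration: two innocuous datasets, joined on a shared
# identifier, yield an inferred sensitive attribute.
purchases = {"user_17": ["prenatal vitamins", "unscented lotion"]}
locations = {"user_17": ["maternity clinic"]}

inferred = {
    uid: "likely pregnant"
    for uid in purchases
    if "maternity clinic" in locations.get(uid, [])
}
print(inferred)  # {'user_17': 'likely pregnant'} - data never disclosed
```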

To deal with these problems, AI architects add statistical "noise" to data to preserve anonymity, although this distortion can reduce the precision of AI models. Data minimization is another technique: only the data necessary for the task is collected. Pseudonymization, a third technique, processes data so that it can no longer be attributed to a specific data subject without additional information. Even so, jurisprudence and AI-Assemblages remain misaligned.
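A minimal sketch of the first and third techniques follows, taking the Laplace mechanism from differential privacy as one concrete form of the noise-adding idea and a salted hash as the pseudonym. The epsilon value, field names, and salt are illustrative choices, not a prescribed implementation.

```python
# Sketch of noise-adding (Laplace mechanism) and pseudonymization.
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(true_count, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity/epsilon:
    smaller epsilon means more privacy but less precision."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def pseudonymize(identifier, salt="stored-separately"):
    """Replace a direct identifier with a salted hash; re-identification
    requires the separately held salt (the 'additional information')."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

print(noisy_count(128))                        # e.g. 127.3, not 128
print(pseudonymize("jane.doe@example.com"))    # opaque token
```

The trade-off the text describes is visible directly: the released count is deliberately wrong by a small random amount, which protects individuals but degrades any model trained on the noisy figures.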