Once, “We were the colour of shadows when we came down with tinkling leg irons…for the silver coins multiplying on the sold horizon” (Derek Walcott, Omeros, Book Three, Chapter xxviii). Now we are certain that we are all free. After all, we are all still running.

We strain to stitch the future to our past as we recall our ashen ancestors. Some came with “tinkling leg irons to join the chains of the sea,” and others, the castaways from the recruitment depots in Calcutta and Madras, crossed the Kalapani craving the loss of every chain of caste.

From the Bight of Benin to the Bay of Bengal, footprints still stain the white sands of antipodal coasts where the traders stood tall because we were always on our knees. Together we have crossed the pandemic portal into the age of digital value chains and global warming – a future that hinges on the twin transitions of decarbonization and digitalization. From the plantation to the pandemic, we survived multiple crossings and changes.

But the climate is not just changing; it is being disrupted. Stefan Rahmstorf warns that there is sufficient ice on Earth to raise sea levels by 65 meters. In the autumn of 2021, UN Secretary-General António Guterres remarked that we are “seemingly light years away from reaching our climate action targets.”

The perfect cannot be the enemy of the good. But where do we begin – when the good not only fails to keep us safe but also falls so far short of what we need? Arundhati Roy asks whether, in crossing the pandemic portal, we intend to drag dead ideas, dried rivers, and hazy skies into the future. Alternatively, we can take on Adam’s task of embracing a paradisal perspective and make something of stunning originality.

Commenting on the “climate delusion,” Greta Thunberg remarks that we have been greenwashed out of our senses. Greenwashing may occur when companies use environmental fantasies to hide bad behavior or errors. With the meteoric rise of AI-Assemblage, greenwashing is now chaperoned by audit washing, ethics washing, and participation washing.

The speed at which AI-Assemblage is evolving cuts into the time companies and governments have to learn from small-scale pilots. It is not unreasonable to assume that by the next century, intelligence will have escaped the constraints of biology.

AI-Assemblage now creates unprecedented opportunities to generate extraordinary business value very quickly. But all of this speed obscures the fact that algorithm auditing has yet to establish itself as a dependable tool for securing accountability. Weak audit trails conceal bias that stems from badly designed and poorly implemented AI models.

The participatory design of AI systems is grounded in the hands-on involvement of those affected by such designs. The underlying idea is that users should be included in the creation and maintenance of such systems. While greenwashing consists of an agglomeration of activities staged for the sake of market optics, participation washing in AI falls short of taking account of power dynamics. The process remains insensitive to the possibility that opportunities for co-creation and involvement can themselves be exploitative and extractive, resting on embedded power imbalances.

Jürgen Habermas of the Frankfurt School, in his essay “Discourse Ethics” (2004, pp. 145–6), develops a framework for dealing with these moral issues in democratic societies. He believes that a democratic society must develop its norms in a manner that disqualifies any norm that cannot meet the approval of all those affected by it in their capacity as participants.

This means that across Latin America and the Caribbean, the moral architecture of AI-Assemblage must be grounded in the everyday participation of those affected by the AI systems.

This participation involves algorithmic auditing: any form of analysis of how an algorithm works, including the collection of data on how it behaves in the social context in which it is deployed and applied. The audit must uncover any negative impact on the rights or interests of those who are subjected to the algorithm.

It is an approach to citizen development saturated with notions of risk management, metric testing, mitigation strategies, and stakeholder well-being. The feedback such audits generate supports a kind of reverse engineering that steers systems toward more equitable and fair results.
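To make the idea of metric testing concrete, the sketch below shows one check such an audit might run: comparing selection rates across demographic groups in a decision log and flagging a disparity. It is a minimal illustration in Python, not a prescribed method; the column names, the sample data, and the four-fifths cut-off are assumptions chosen for the example.

    from collections import defaultdict

    def selection_rates(records):
        """Share of positive decisions for each group in a decision log."""
        totals, positives = defaultdict(int), defaultdict(int)
        for row in records:
            totals[row["group"]] += 1
            positives[row["group"]] += row["approved"]
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact(rates):
        """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
        return min(rates.values()) / max(rates.values())

    if __name__ == "__main__":
        # Hypothetical log of automated decisions, one row per person affected.
        log = [
            {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
            {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
            {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
        ]
        rates = selection_rates(log)
        ratio = disparate_impact(rates)
        print(f"selection rates: {rates}, disparate impact ratio: {ratio:.2f}")
        # The 0.8 cut-off echoes the widely cited "four-fifths" heuristic;
        # a real audit would pair any flag with context, narratives, and review.
        if ratio < 0.8:
            print("Potential adverse impact: flag for review and mitigation.")

Even this tiny example shows why weak audit trails matter: without the decision log, neither the rates nor the disparity can be computed, and the bias stays hidden.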

The EU High-Level Expert Group on AI, supported by an online forum of over 4,000 members from the tertiary sector, captains of industry and commerce, civil society groups, EU citizens, and bureaucrats, argues that Trustworthy AI can be achieved through a set of ethical principles:

  • Respect for human autonomy
  • Prevention of harm (non-maleficence)
  • Fairness/justice
  • Explicability
  • Beneficence

However, trust assumes the trustee’s receptiveness to the attitudes of the trustor. The common sense humans have accumulated from experiences with AI-driven drone warfare, nuclear energy, pandemics, and genetically modified food is that new technologies for navigating uncertainties require analysis of safety, risk-benefit ratios, and careful attention to the narratives, perceptions, and attitudes of the participants affected.

One underexplored question is whether trustworthiness is something that can be possessed by machines at all. If untrustworthy corporations and governments behave badly, the only thing a well-functioning AI can do for them is make their unethical conduct more orderly and efficient.