We are already at the point where nothing about Artificial Intelligence (AI) is a purely domestic matter. It is trans-border. Offshore, it has no respect for the exclusive economic zones in which a sovereign state holds special rights. On land, it penetrates the geographic boundaries of nation states and amplifies the problems associated with porous borders. AI even redraws boundaries as it adds to the flux of fluid loyalties in conflict zones. A “singularity” on the battlefield, where humans can no longer keep pace with the speed of machine-led decisions during combat, is already a lived experience.

The UK no longer sits in a position of technological inferiority; some nation states have become true peers. The UK will convene a Global Summit on AI this autumn. To secure a place at the epicentre of what is about to unfold, OpenAI and Anthropic have opened offices in London. OpenAI has appointed Faculty, a UK-based firm, as its technical integration partner. Google DeepMind will expand under the leadership of Demis Hassabis, with headquarters in King’s Cross, London. Palantir has selected the UK as its new European HQ for AI development.

Palantir provides many of the world’s most critical institutions with foundational architecture for data processing. Making London an AI talent hub is central to the UK’s economic development strategy. But attracting talent is just one horizon in the solution ecology. Nurturing indigenous talent is another, and this strand requires a Scholarship Uplift.

The UK Government will increase the number of scholarships it funds for students undertaking postgraduate study and research in STEAM subjects at UK universities. The number of Marshall Scholarships will rise by twenty-five per cent, to 50 places a year, and the UK will fund five new Fulbright Scholarships. These new scholarships will focus on boosting UK expertise in STEAM subjects.

The European Parliament describes AI as software that can, for a given set of human-defined objectives, generate outputs such as content, forecasts, recommendations, or decisions influencing the environments it interacts with. But there is still no agreement on a global AI regulator.

An AI Bill of Rights may be on the horizon. For now, however, the response to AI remains disjointed: the world’s first Rule Book on the use and development of AI systems exists only as a few scattered, Banksy-like stencils across many jurisdictions.

The US is setting up a voluntary code of conduct under which companies using or developing AI will be asked to sign up to a set of standards. China is building a set of guardrails that will notify users whenever AI is being used in a process. The EU is considering grading AI products by their impact and risk profile.
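The EU’s draft AI Act names four such tiers: unacceptable, high, limited, and minimal risk. A minimal sketch of how grading by use case might look — the tier names follow the draft Act, but the use-case mapping and the conservative default below are illustrative assumptions, not taken from the legislation:

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. state social scoring
    HIGH = "high"                  # permitted only under strict obligations
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # largely unregulated


# Illustrative mapping of use cases to tiers. The real Act enumerates
# high-risk uses in its annexes; this table is an assumption for demonstration.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "cv screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def grade(use_case: str) -> RiskTier:
    """Grade an AI product by its declared use case. Unknown use cases
    default to HIGH pending assessment (an assumed, conservative policy)."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)
```

The point of the sketch is the design choice, not the table: under a risk-graded regime, the burden sits with classification, so an unrecognised product should fall into the stricter tier until assessed.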

The UK’s strategy, by contrast, is decentralised: it intends to fold any new AI regulation into the remits of existing regulatory bodies. This means, for example, that the Equality and Human Rights Commission will have to build out its own capabilities to handle allegations about promotions or appointments where an AI-driven HR system may have influenced the outcome.

Tribunals and jurists will have to retool to deal with nascent Digital Injustice. Many intractable problems will be solved using AI, but research has shown that its use for facial recognition, résumé screening, and photograph sorting can entrench algorithmic harms and injustice.

High-risk AI systems used to generate credit scores for access to loans or housing need monitoring. The challenges facing police and intelligence services must be a top priority, especially since nefarious actors are already “fully on top” of their practice and can quickly get ahead in the race to use AI.

Surveillance and predictive policing are controversial trends with ethics being a central concern. Doubts and uncertainties about the impact of AI on communities and cities regarding privacy abound, and we cannot disconnect these discussions from debates about spatial inequality.

Studies show that AI could help cities reduce crime by 30 to 40 per cent and cut emergency-service response times by 20 to 35 per cent. Some municipalities have invested in crowd management and real-time crime mapping; others use aerial surveillance, crowdsourced crime reporting, and emergency apps to ensure public safety. Data-driven policing remains a challenge, and much hinges on curriculum content and standards at police academies.

The OECD.AI Policy Observatory compiled empirical data on AI and Big Data surveillance use across 179 countries between 2012 and 2022. Its findings indicate that at least 97 of the 179 countries actively use AI and big-data technology for public surveillance: smart city/safe city platforms (64 countries); public facial recognition systems (78 countries); smart policing (69 countries); and social media surveillance (38 countries).
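The figures above can be put in proportion with a few lines of arithmetic. The counts are taken directly from the text; the percentage calculation is a straightforward share of the 179 surveyed countries:

```python
# Surveillance adoption figures as reported in the text
# (OECD.AI-compiled data, 179 countries, 2012-2022).
TOTAL_COUNTRIES = 179

adoption = {
    "smart city / safe city platforms": 64,
    "public facial recognition systems": 78,
    "smart policing": 69,
    "social media surveillance": 38,
}

# Countries actively using at least one of the technologies above.
ACTIVE_OVERALL = 97


def share(count: int, total: int = TOTAL_COUNTRIES) -> float:
    """Return adoption as a percentage of surveyed countries."""
    return round(100 * count / total, 1)


print(f"Any AI surveillance: {share(ACTIVE_OVERALL)}%")  # 54.2%
for tech, n in sorted(adoption.items(), key=lambda kv: -kv[1]):
    print(f"{tech}: {share(n)}%")
```

In other words, more than half of the surveyed countries deploy at least one of these technologies, with facial recognition (roughly 44 per cent) the most widespread single category.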

Surveillance is an old tool, but open-data analytics makes it possible to sift huge amounts of data on criminality and terrorism and to identify trends and patterns. Where data enrichment is planned, AI is the layer that helps law enforcement agencies better deliver their mandate and trigger behaviour change. The upheavals of AI are guaranteed to escalate and to become even more unsettling.