The threat posed to democracy by artificial intelligence (AI) is not a dystopian hallucination galloping towards us. It is a future that is already here. It is a threat that requires the drafting of national guardrails and transnational cooperation to block interference in election outcomes. Any failure to address such disruption can only plunge nation-states into chaos and push them behind the curve.

AI has disrupted general elections around the world. Simulacra – the generation by models of a real without origin or reality, a hyperreal – have become a novel trouble for democracy. For decades, election campaigns have weathered a deluge of doctored audio clips, memes and photoshopped images. But the use of AI to distort the outcome of campaigns is a fresh frontier. Election outcomes have already precipitated attacks on capitol buildings and claims that conditions were unjust.

Such disruption requires each country to establish a Taskforce to Defend Democracy. Protecting the sanctity of the truth is not censorship. Without it, we cease to believe anything; we trust nothing anymore. Corrosive attacks on the veracity of information compensate every liar with a dividend. But “lies have consequences”, as Justin Nelson, counsel for Dominion Voting Systems, put it after Fox News agreed to a US$787.5 million settlement in the Dominion defamation lawsuit. Misinformation voyages without itineraries, lodgings, hesitations, fences or customs and border control.

Many states have established Threat Intelligence Centres. These centres expand the capacity of the state to detect and address viral disinformation networks, misinformation strategies and threat actors that dwell undetected on an attack plane while the exfiltration of targeted data takes place. Such centres illuminate how domestic and foreign actors “bury the truth with lies”, and pair information operations with new cyber-attack strategies to realize malicious goals. Threat Intelligence Centres rupture blinkered information bubbles and the iridescent balloons floated to distort, twist and colour human perception.

During the election campaign in Slovakia in September 2023, a deepfake audio file emerged of Michal Simecka, the leader of the liberal Progressive Slovakia party, allegedly discussing how to rig the election. Mr. Simecka lost the election; the Smer-SSD party won it. No one can gauge how the deepfake affected voters. It may have caused some votes to shift either way. It may have convinced others to sit on the fence and not participate in the election at all. The post was eventually dismissed as fake, but nothing could erase the harm or reverse its effect on the electorate.

Wistful poster illustrations with distinctly Soviet political imagery, in the iconic style of Gustav Klutsis, dotted the boulevards of Buenos Aires. AI-generated images showed Sergio Massa, in a shirt bedecked with military ornaments, gesturing to a pallid blue sky as a surge of older people in drab clothing, their faces disfigured, gazed toward him in hope. The style and the signal are clear. Other AI-generated images maligned candidates as zombie-like brigands.

From chainsaws to cabinets of cloned dogs, the campaign in Argentina was crammed with disruption. An image of Javier Milei depicted as a Chinese Stalinist leader was viewed more than 30 million times. Argentina’s election was a testbed for AI in campaigns, for how AI can create realities and alter human action. The Center for Technology Innovation at the Washington-based Brookings Institution sees the effects of AI in politics as a troubling omen, as campaigners use AI to deliver deceptive messages.

The use of easily accessible AI technology in political campaigning is a broad global trend. While most of the AI-generated images used in recent election campaigns are satirical, operating as “Picong” or “Piquant” – light, amusing banter – AI algorithms can also be trained on copious online footage to create realistic but fabricated images, audio clips and video overlays. This is not the same thing as the Trumpian idea of “fake news”. It is a dangerous new frontier in disinformation and manipulation. Beyond a shadow of a doubt, malevolent actors will use large language models to fabricate content; the boom in such models appears to be a gift to makers of mischief and malicious content.

In 2024, advertisers will have to disclose when AI or other digital methods are used to alter or create political or social advertisements. States are already considering legislation to prohibit the distribution of materially deceptive AI-generated videos, images and audio clips tied to political campaigns. Electoral commissions are also looking into regulating AI-generated deepfakes in political advertisements to defend democracy.

A deepfake audio clip of Sir Keir Starmer surfaced during the Labour Party’s October 2023 conference in the UK. The post was quickly denounced as fake, but by then 1.5 million people had viewed the content and drawn their own inferences. In November, a deepfake audio clip emerged in which London Mayor Sadiq Khan appeared to discuss postponing Armistice Day amid discord seeping into European capitals from the Middle East.

Defending democracy is now a concern for Homeland Security, Ministries of Defence, Threat Intelligence Centres, Cyber Security Centres, the Metropolitan Police, and counter-terror experts, all of whom must monitor the circulation of AI-generated content during election campaigns.