IP19: What might AI mean in a year of high octane elections?
Assessing electoral risk means looking forward at emerging threats, rather than simply projecting past ones forward
Apologies for the hiatus in posting. It’s clearly been a hugely dramatic few weeks in tech policy, and in AI in particular. The first AI Safety Summit at Bletchley Park, along with the White House Executive Order on AI, marked a subtle shift in the direction of AI regulation (with a tilt towards safety and away from innovation). The drama at OpenAI last week is possibly enough to fill a few novels, never mind one Substack.
More on all of this to come in future weeks…
This week, with a bumper year of high-profile elections coming up, it’s important to consider the potential impact of AI on elections, particularly its power to impersonate, to deploy the most effective language and to create highly realistic deepfakes, all of which can be done both rapidly and at scale. The key is not simply to look at past electoral risks and magnify them with AI. Instead, it’s to consider what new electoral risks might emerge as the technology develops.
Please drop me a line at david@artemonstrategy.com
The impact of emerging tech on elections
There were two very different social media stirs involving British politicians over the past few months. The first followed the Prime Minister’s visit to the Great British Beer Festival. A photograph circulated of Rishi Sunak pulling a badly poured pint, with a member of staff behind him rolling her eyes. The photograph was clearly a deepfake (and a bad one at that), but that didn’t stop it being shared far and wide on social media, including by Members of Parliament.
A few months later, an audio file spread of the Labour Party leader allegedly swearing at staff. It too was clearly a deepfake, but it was rapidly shared by many and continued to circulate long after it had been exposed as a fake. And only last weekend, a faked audio clip of the Mayor of London, Sadiq Khan, was reported to be circulating amongst extreme-right groups online, with the police saying that they had no powers to stop it.
These events happened during a relatively quiet period in politics, when most of the British public aren’t really engaging in politics. Could you imagine the stir that such deepfakes could cause in the heat and frenzy of an election campaign?
And the UK certainly isn’t the only country facing the issue. The recent Slovakian elections were riddled with deepfakes, the Ron DeSantis campaign in the United States created a deepfake video of Donald Trump embracing Anthony Fauci, and the far-right AfD in Germany created dystopian deepfakes.
Tech companies, governments and civil society all need to think carefully and seriously about the impact AI might have on elections and how that impact can be properly reduced. And the threats might not be obvious ones like deepfakes or supercharged fake news. Instead, they might rely more on AI’s ability to understand language and inform targeting.
Election day as judgement day
Political philosopher Karl Popper famously said that, in a democracy, “election day ought to be a day of judgement.” The author of The Open Society and Its Enemies saw in democracy the ability to change government peacefully and for electorates to provide a definitive verdict on a government.
The next year will see a number of these “days of judgement”, with a series of keenly fought elections that will test the impact of AI on democracy. The United States and the UK will both have elections late next year (although the UK PM could potentially stretch things out to January 2025) and both have had challenges with disinformation and outright conspiracies in previous cycles. India, which has seen disinformation spread via messaging apps in previous elections, also faces an election next spring. And Taiwan, with obvious challenges around misinformation, has an election in January. The European Union has elections for the European Parliament in June next year, with many countries in the bloc being targets of disinformation campaigns. The result of the Dutch election last week illustrated the controversy that the upcoming set of elections is likely to raise.
Modern trends have already made elections much more prone to disinformation and conspiracies, and AI, with its potential for impersonation, accuracy and rapid scale, threatens to amplify both.
A massive increase in polarisation in a number of democracies (notably the US, but also countries like the UK, Brazil and France) has meant that some voters are more inclined to think the worst of their ideological opponents. A decline in trust in a variety of institutions, including traditional media, has meant that people are more likely to believe alternative sources, often ones that confirm a prior ideological worldview.
Technological advances and geopolitical trends have accelerated this. The growth of user-generated content has been both a boon and a burden: it has enabled a wider variety of sources, but also the dissemination of disinformation and poor-quality content. Crucially, geopolitical shifts have led to a rise in state-sponsored disinformation and attempts to diminish the legitimacy of democracies and democratic elections. Deglobalisation, a “democratic recession” in many parts of the world, the return of war to Europe and military stand-offs elsewhere have all accelerated disinformation as a state-driven tactic.
And Artificial Intelligence could potentially accelerate all of this. But it’s important to remember that tomorrow’s election risk mightn’t simply be a bigger version of yesterday’s.
AI and election risk
Several of the factors that make Artificial Intelligence so powerful and transformative also mean that election integrity could be seriously undermined if the correct guardrails aren’t quickly put in place. Many of these threats aren’t new, but they do represent a considerable acceleration of the potential scope and scale of existing issues, across a variety of areas:
Targeting. The ability to target campaign messaging precisely at key groups of voters has been the holy grail of election campaigns since the dawn of democracy itself. John Wanamaker’s aphorism that “half of my advertising money is wasted, I just don’t know which half” is often repeated in digital marketing, but it is never truer than in political campaigning. Advancing AI makes it almost effortless to target and tailor a message to the priorities of an individual voter, meaning the technology will become ever more ingrained in election campaigns. This also, of course, leaves it much more open to manipulation by bad actors.
Mimicry, impersonation and a grasp of the most impactful language. One of the striking features of the initial ChatGPT phase was its gift for mimicry and impersonation. It’s not hard to see the step to using impersonation to spread propaganda in support of influence campaigns, as well as using LLMs to craft more impactful and persuasive language. As a Stanford report noted, influence operations can use the ability to mimic and impersonate convincingly to both personalise propaganda and increase its potential impact. And academics have already found that “AI-generated messages were at least as persuasive as human-generated messages across all topics”, leaving many participants “significantly more supportive” of a wide range of arguments after reading AI-written texts.
Deepfakes. The ability to generate highly realistic fake images and audio is one of the most notable features of this wave of generative AI and is something that we have noted in a number of Inflection Points. The increasing accuracy of these fakes is combined with an increased ability to share them rapidly, so that their potential to upend an election campaign is multiplied.
Disinformation at scale. The advent of AI means that disinformation can be created and spread more easily than ever before. Previous waves of “fake news” were spread by infamous groups, such as the “fake news farms” in Macedonia or the Chinese social media trolls nicknamed the “50 cent army”. Today, disinformation can be created at the touch of a button, without the need to employ an army of trolls, and it can be spread far more rapidly and extensively than before. As a Stanford report argued:
“For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor. For society, these developments bring a new set of concerns: the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion.”
Everything is accelerated. As I’ve noted on a number of occasions, AI has the real potential to supercharge the beneficial impact of tech on society and the economy. But, without sufficient vigilance, it also has the potential to accelerate negative trends around social media. Most notably, as The Economist set out recently, it can simply produce more propaganda content that might be able to reach more people via social media, messaging apps or other means. AI will also potentially enable virality to be tested and made more powerful.
What does this mean for business and governments?
The coming year of high-profile elections could play an important role in determining societal expectations around Artificial Intelligence. As I’ll be returning to in future Inflection Points, the shape and nature of any emerging AI regulation is likely to be impacted by high profile events that shape the public and political mood around the technology.
And elections are events that amplify emotions and could shift the public debate.
The potential impact of AI on elections has been recognised by a number of senior figures in tech, including Sam Altman, who has called AI and election integrity a “significant area of concern”. It should be noted that large tech companies have already made great strides in tackling disinformation and fake news. Concrete and important steps have been taken, such as moves by Alphabet, Meta and Microsoft to insist that digitally generated political content carries “watermarks”. Such moves show that the issue is being taken seriously. Political campaigns and their proxies might also look to declare a truce on the use of deepfakes during a campaign (although this is more likely in some countries than others).
But, in election integrity, the nature of the threat might be changing as rapidly as the technology itself. The threat to election integrity in 2024 might not be the same as in 2016 or 2020. Impersonation and new channels of virality (via messaging apps, rather than purely social media) might matter more than deepfakes or fake news. And AI’s ability to understand and craft the most persuasive language in a targeted way could be more impactful still if used by bad actors in propaganda and influence campaigns.
This means that being vigilant about emerging threats is as important as considering how previously known threats might accelerate. As I have argued in various Inflection Points, considering how technology and other forces can shape the future is a much more powerful tool than simply fighting the last war.