Why AI is talking politics this year

AI is already playing a part in this year’s General Election - for good and bad

We might not realise it, but artificial intelligence (AI) is already playing a part in this year’s General Election - for good and bad. One candidate is using an AI representative to answer constituents’ questions, and the BBC has reported on fake AI-generated content on TikTok and deepfakes on X. The true scale of AI’s impact on the General Election will only become clear in the weeks and months after the vote.

The latest wave of AI has the potential to increase political engagement, which is particularly low amongst young people and minority ethnic groups. Generative AI tools like ChatGPT could help explain political systems, summarise manifesto pledges, and encourage under-represented groups to go to the ballot box. AI also has potential benefits for political parties and independent candidates, helping them target voters effectively and making campaigns less costly to run, which levels the playing field for smaller players. While this targeting has some benefits, it also risks polarising and narrowing the campaign content individuals see, and we should consider the public’s privacy rights over their private interests.

But AI also has extraordinary potential to influence people’s thoughts, behaviours and votes. We won’t know until after the election how political parties and campaign groups have used AI to target voters - or how they have chosen to serve messages that support their campaigns. In previous elections, both in Europe and further afield, data-centric technologies like AI have been used to flood social media platforms with personalised and targeted ads, which often contained half-truths and dubious claims. There are already suggestions that this is happening in the UK, with the BBC flagging suspicious activity by Reform on TikTok.

AI is constantly evolving and gives anyone with a smartphone the ability to create and spread misinformation - should they wish to do so. People are increasingly aware that these capabilities exist and have learned how to use them. There is growing evidence that AI has been, and will be, used to generate realistic deepfakes at scale, create and spread disinformation, and target voters with content that reinforces harmful or untruthful messages to a degree not seen before.

We have robust electoral law in the UK, where our research and engagement are based. Many of the offences, ills and evils that can be committed are already defined, although UK law does not yet include definitions relating to AI. However, it is possible that our institutions, and those of other states, could be overwhelmed by the sheer volume of breaches and rule-breaking. We need to ensure that our regulators and institutions have guidance on applying current regulations to AI, along with the resources and technical expertise to understand how and when to enforce the rules - and the capacity to do so in as close to real time as possible. This is particularly challenging in a sector like AI, which commands impressive salaries that government and regulators cannot always match, and where they are already experiencing significant skills shortages.

Even where there is no ill intent in the application of generative AI, there is still a risk of embedding, amplifying and entrenching biases - biases that can exist in the data on which AI systems are trained. Vast swathes of data are harvested from social media platforms and used to train AI, but not enough is being done to ensure it is accurate and representative. At the same time, the technology is at an inflexion point: most curated data sources, such as archives, libraries and media content, have already been used to train the current, less-than-perfect AIs on the market. Improvements - for instance, in the form of more truthful, verified content - cannot be achieved if newer releases of these AIs rely on the flood of synthetic content we already see on social media, so there is a risk that the technology worsens as it begins to rely on AI-generated data.

Finding a solution is not something for the government alone to consider. It must involve civil society, private tech companies, citizens and consumers. Companies will need to assure their data and be more open about the data they feed into their AI algorithms. As consumers and as a society, we must be much more critical and question the fundamental origin of information and the data on which it is based. This starts with equipping people with data and AI literacy skills and empowering them to demand that AI-generated content is labelled as such in the media and elsewhere.

Governments should also require political candidates and campaigns to disclose their use of AI and algorithmic systems, so people know whether they are being targeted and whether they have been algorithmically selected to have information pushed their way. The think tank Demos has already published an open letter calling for a commitment to transparency on AI in elections, but this should be formally adopted and overseen by the Electoral Commission.

How anybody chooses to vote in an election is up to them, but in the context of new and powerful technology, we should make sure it really is their own choice. It is too late for this election, but we must ensure we learn the lessons of 2024 to protect our democracies and societies for years to come.

Resham Kotecha is head of policy at the Open Data Institute.
