Davos 2025: Misinformation and disinformation are most pressing risks, says World Economic Forum
World leaders, business chiefs and civil society organisations will discuss the risks posed by misinformation, disinformation and artificial intelligence at the World Economic Forum
Misinformation and disinformation pose the greatest risk to countries, businesses and individuals over the next two years.
The rise of fake news, the decline of fact checking on social media and the growth of deep fakes generated by artificial intelligence (AI) threaten to erode trust and deepen divisions between countries, the World Economic Forum (WEF) said today.
The vulnerability of governments, businesses and society to AI-generated fake narratives will be one of the key risks under discussion when business leaders, politicians, academics and non-government organisations assemble at the World Economic Forum Annual Meeting in Davos from 20 to 24 January.
The World Economic Forum’s Global risks report 2025, which draws on the views of 900 business, academic, government and civil society leaders, and over 11,000 businesses, paints a gloomy picture of countries becoming more isolated, growing risks of armed conflict, and worsening environmental problems over the next two years.
“Rising geopolitical tensions, a fracturing of global trust and the climate crisis are straining the global system like never before,” said Mirek Dušek, managing director of the World Economic Forum.
“In a world marked by deepening divides and cascading risks, global leaders have a choice: to foster collaboration and resilience, or face compounding instability. The stakes have never been higher,” he added.
With wars underway in the Middle East, Ukraine and Sudan, further armed conflict is the most pressing immediate risk in 2025.
Extreme weather events, geo-economic confrontation in the form of trade wars and tariffs, and the spread of misinformation and disinformation on social media dominate the short-term risks.
The WEF is less optimistic about the longer-term outlook, with most experts predicting more severe turbulence by 2035, driven by environmental, technological and societal challenges.
More extreme weather events, shortages of natural resources, the collapse of ecosystems, and the health and ecological impacts of pollution all feature in the top 10 longer-term risks, alongside the risks of AI and the continued growth of misinformation and disinformation.
Generative AI will drive misinformation
Generative AI has made it easier for criminals, state agencies, activists and individuals to automate disinformation campaigns, and to give them significant reach and impact, according to the WEF.
It is becoming increasingly difficult for people, governments and companies to identify trustworthy information as more people rely on social media and the internet for information, the forum said in a 100-page report.
The use of algorithms with hidden or undetectable biases will also exacerbate the impact of misinformation and disinformation, it said.
Using AI to screen job applicants with a model trained on a pool of candidates that may not reflect applicants’ gender, race or nationality, or using AI in predictive policing, could be particularly problematic.
Read more from the World Economic Forum
- Davos 2024: AI-generated disinformation and misinformation pose risks to upcoming elections in the US, the UK, Asia and South America over the next two years. Attempts to undermine the democratic process by spreading false narratives could erode confidence in governments and lead to civil unrest.
- Davos 2023: Pervasive cyber crime and cyber security gaps pose severe risk to organisations. Governments and organisations face tough trade-offs as they balance immediate problems caused by economic recession, energy shortages and rising interest rates with longer-term risks, including the impact of global warming.
- Davos 2022: Cyber risks are among the top five risks facing organisations and governments over the next two to five years. Digital inequality and the overcrowding of space with communication satellites present further challenges.
- Davos 2020: Climate change, natural disasters, extreme weather and loss of biodiversity are the greatest risks we face. With cyber conflicts, state-sponsored hacking and internet fragmentation, doing nothing is not an option.
- Davos 2018: The cost of natural disasters is now at record levels, but the cost of cyber crime is far higher. The WEF hopes to persuade governments to work together on the problem, but with isolationist politics back in fashion, can it succeed?
“When algorithms are applied to sensitive decisions, biases in training data or assumptions made during model design can perpetuate or exacerbate inequities, further disenfranchising marginalized groups,” the WEF warned.
Without clear accountability, the use of automated algorithms makes it difficult to apportion responsibility when harmful or erroneous decisions are made.
This lack of transparency and accountability can foster mistrust and scepticism about the decisions taken by governments and businesses, it said.
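One concrete way for organisations to probe for this kind of problem is to audit an automated system’s outcomes across demographic groups. The short Python sketch below is illustrative only, not a method prescribed in the WEF report: the group labels, decisions and the “four-fifths” threshold are hypothetical assumptions, but it shows what a simple demographic-parity check on a shortlisting model’s decisions might look like.

```python
from collections import defaultdict

# Hypothetical shortlisting decisions: (candidate group, model decision).
# Group names and outcomes are invented for illustration.
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def selection_rates(decisions):
    """Return the share of candidates the model shortlists in each group."""
    totals, shortlisted = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        shortlisted[group] += decision
    return {group: shortlisted[group] / totals[group] for group in totals}

rates = selection_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A common rule of thumb flags potential disparate impact when the lowest
# selection rate falls below 80% of the highest.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33
```

In practice, checks of this kind would run on real decision logs and sit alongside the human oversight the report recommends.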
Surveillance
The WEF warned that as the computing power available to governments and technology companies continues to rise, there is a risk of greater surveillance of citizens by governments and businesses, posing threats to privacy.
When managed responsibly, the collection of data about citizens can provide better public services, but without effective legal safeguards in place, there is a risk data will be misused.
However, citizens are often unaware of how their data is collected, used and stored, which limits their ability to make informed decisions, it said.
Supply chains are vulnerable
With geopolitical volatility likely to continue over the next two years, organisations will need to check how vulnerable their supply chains are and assess the reputational risks of buying from suppliers based in countries in conflict.
Carolina Klint, chief commercial officer for Europe at Marsh McLennan and a contributor to the report, said that increasing protectionism by countries will pose profound threats to “already fragile and stretched” global supply chains.
That is likely to be exacerbated by measures to restrict the export of data between countries and a rise in malicious cyber attacks, she said.
“By taking proactive steps to enhance supply chain resilience and invest in robust cyber security, businesses will be better placed to navigate these challenges and position themselves for success in an increasingly complex and fractured global risk landscape,” she added.
Peter Giger, group chief risk officer of Zurich Insurance Group, said that with global warming exceeding 1.5°C for the first time in 2024, the stakes were “sky high”.
“We must focus on environmental risks – from extreme weather to biodiversity loss. Immediate action is critical to mitigate the worst impacts of climate change, and to build resilience. The costs of inaction and the lack of global cooperation are having an adverse impact,” he said.
“The biggest risk would be to sit back now and say there’s nothing we can do. It’s not too late,” he added.
The WEF’s call to action to tackle bias in AI
Organisations should use AI models that minimise bias and take steps to remove bias from data before, during and after model training, the World Economic Forum said today.
There is a “pressing need” to “upskill” developers, data scientists and policy makers to ensure they keep up with the latest developments in de-biasing, the WEF said in its Global risks report 2025.
Governments, civil society and academics should collaborate to create comprehensive training programmes for AI practitioners and policy makers, its experts advised.
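To illustrate what removing bias from data before training can mean in practice, the sketch below shows one well-known pre-processing approach, often called reweighing, in which training records are weighted so that group membership and outcome become statistically independent. It is a minimal, hypothetical example rather than anything specified in the WEF report: the group labels, outcomes and function name are invented, and real de-biasing work would combine pre-processing like this with checks during and after training.

```python
from collections import Counter

# Hypothetical historical hiring records: (group label, hired outcome).
# The groups and outcomes are invented for illustration.
rows = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

def reweighing_weights(rows):
    """Weight each record as if group membership and outcome were
    statistically independent (the 'reweighing' pre-processing idea)."""
    n = len(rows)
    group_counts = Counter(group for group, _ in rows)
    outcome_counts = Counter(outcome for _, outcome in rows)
    pair_counts = Counter(rows)
    weights = []
    for group, outcome in rows:
        expected = group_counts[group] * outcome_counts[outcome] / n
        weights.append(expected / pair_counts[(group, outcome)])
    return weights

# Over-represented (group, outcome) pairs get weights below 1 and
# under-represented pairs above 1; the weights can then be passed to any
# training routine that supports per-sample weighting.
print(reweighing_weights(rows))  # [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
```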
The WEF said there is an urgent need for public awareness campaigns to educate citizens about the risks posed by disinformation and misinformation, threats to privacy, AI and cyber attacks.
It also called for governments and organisations to set up supervisory boards for AI and to introduce human oversight into AI decision-making.
AI-generated content should be labelled through digital watermarking, and information on data practices, safety policies and the potential risks of AI models should be made publicly available.