AI in the Netherlands: Talent, data and responsibility
Applied researchers in the Netherlands are working to bring AI to a higher level
“The Netherlands Organisation for Applied Scientific Research [TNO] is the innovation engine in the Netherlands,” says Peter Werkhoven, chief scientist at TNO and professor at Utrecht University. “We turn great science into great applications. We were established by law in 1932, and we solve societal and economic challenges through innovation. Our role in the value chain is to spot promising academic research and bring it to a level where it can be used by industry and government.”
A case in point: around six years ago, TNO was part of a small consortium that developed a national artificial intelligence (AI) strategy for the Netherlands. The goal was to put the country on track to benefit from AI. “Like many new technologies, AI falls into an innovation paradox,” says Werkhoven. “Universities have a lot of great knowledge, but industry cannot turn that knowledge into value without help. This is where we come in with applied research.”
The consortium developed a plan to fill three gaps in the Netherlands. The first two concerned industrial players directly: a shortage of AI talent and a shortage of data to train AI. The third concerned governments – right up to the EU level: AI needed to be applied more responsibly. “We wrote the plan, and it got funded,” says Werkhoven.
As part of implementing that strategy, TNO does more than just connect universities, industry and government. It also brings technology to a level where it can be applied. One type of technology it develops is hybrid AI, which combines machine learning with machine reasoning. A hybrid system uses symbolic reasoning in addition to deep learning: it learns patterns from data, as today’s deep-learning systems do, but it can also reason about its decisions and explain why it does what it does.
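How this differs from a pure deep-learning system is easiest to see in miniature. The sketch below is a minimal illustration, not a description of TNO’s actual systems: a placeholder function stands in for a trained network’s pattern-recognition score, and an explicit rule layer turns that score into a decision with a human-readable reason attached. All names, weights and thresholds are invented for the example.

```python
# Minimal sketch of a hybrid (neuro-symbolic) decision step.
# learned_pedestrian_score() stands in for a trained neural network;
# the rule layer adds explicit knowledge and a readable explanation.

def learned_pedestrian_score(sensor_features):
    # Placeholder for a neural network: a weighted score for the
    # probability that a pedestrian is present (weights are invented).
    weights = {"motion": 0.5, "shape": 0.4, "proximity": 0.1}
    return sum(weights[k] * sensor_features[k] for k in weights)

RULES = [
    # (condition, decision, reason): explicit, inspectable knowledge
    (lambda s, p: p > 0.8, "BRAKE",
     "high confidence that a pedestrian is present"),
    (lambda s, p: p > 0.5 and s["proximity"] > 0.7, "SLOW_DOWN",
     "possible pedestrian close to the vehicle"),
    (lambda s, p: True, "CONTINUE", "no safety rule fired"),
]

def decide(sensor_features):
    score = learned_pedestrian_score(sensor_features)  # pattern recognition
    for condition, decision, reason in RULES:          # symbolic reasoning
        if condition(sensor_features, score):
            return decision, f"{reason} (score={score:.2f})"

action, why = decide({"motion": 0.9, "shape": 0.8, "proximity": 0.9})
print(action, "-", why)  # BRAKE - high confidence ... (score=0.86)
```

The point of the structure is the second half: because the rules are explicit, the system can always report which rule fired and why, which a bare neural network cannot.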
“Autonomous cars and autonomous weapon systems won’t come to their full potential until they include human ethical values in their decision-making – and until they can explain what they’re doing,” says Werkhoven. “First, we have to give AI both goals and a sufficient set of human values. Second, we want the machines to explain themselves. We need to close the accountability gap and the responsibility gap. This is why we need hybrid AI.”
TNO, for example, works on hybrid AI for autonomous cars and for mobility as a service, which can be personalised for citizens using very complex information systems. It also works on predictive maintenance, where digital twins are used to predict defects in a bridge – and its overall life expectancy – based on the many streams of sensor data coming from the bridge in real time. This enables maintenance to be scheduled at the optimal time – not too late and not too early.
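The scheduling logic behind that idea can be shown with a toy calculation. A real digital twin fuses many sensor streams into a calibrated physical model of the bridge; the sketch below, with invented strain readings and an assumed safety threshold, keeps only the simplest version: fit a degradation trend and project when it crosses the limit.

```python
# Toy predictive-maintenance calculation: fit a linear trend to daily
# strain readings and project when an assumed safety threshold is hit.
# All numbers are invented; a real digital twin is far richer than this.

def fit_trend(readings):
    # Ordinary least-squares slope and intercept over (day, value) pairs.
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

strain = [1.00, 1.02, 1.03, 1.05, 1.08, 1.09, 1.12]  # daily sensor readings
FAILURE_THRESHOLD = 1.50                             # assumed safety limit

slope, intercept = fit_trend(strain)
days_left = (FAILURE_THRESHOLD - intercept) / slope
# Schedule work comfortably before the projected crossing: not too late,
# and (because the projection keeps updating) not too early either.
print(f"Projected threshold crossing in about {days_left:.0f} days; "
      f"plan maintenance around day {0.8 * days_left:.0f}.")
```

In practice the trend is re-fitted as new sensor data arrives, which is what lets the maintenance date move later if the bridge degrades more slowly than projected.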
In the energy domain, TNO works on smart energy grids that match supply to demand. In healthcare, it works on AI that provides personal lifestyle interventions. Many diseases are related to lifestyle and can be cured or prevented by helping people adjust their habits, but such a system cannot suggest the same thing to every person. Advice has to be personalised, based on a combination of lifestyle and health data. AI recognises patterns in the combined data sets while protecting privacy through secure data-sharing technology.
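The article does not specify which secure data-sharing technique TNO uses, but one standard building block for this kind of privacy-preserving analysis is additive secret sharing, sketched below: each person’s value is split into random-looking shares held by different servers, so a combined statistic can be computed without any single party ever seeing an individual’s data.

```python
# Minimal sketch of additive secret sharing (one possible technique;
# not necessarily the one TNO uses). Each value is split into shares
# that individually look random but sum to the original modulo PRIME.
import random

PRIME = 2_147_483_647  # all arithmetic is done modulo a large prime

def share(value, n_parties=3):
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)  # shares sum to value
    return shares

# Three patients' daily step counts, each split across three servers:
patients = [4_200, 11_350, 7_800]
server_totals = [0, 0, 0]
for steps in patients:
    for i, s in enumerate(share(steps)):
        server_totals[i] = (server_totals[i] + s) % PRIME  # looks random

combined = sum(server_totals) % PRIME  # only the aggregate is revealed
print(combined == sum(patients))       # True: correct total, private inputs
```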
Morality and explainability
The work in the Netherlands shines a light on two of the most pressing issues around AI. They are not just a matter of technology, but rather a question of morality and explainability. In the healthcare sector, many experiments use AI to diagnose and advise on treatments. But AI does not yet understand ethical considerations and it cannot explain itself.
These two things are also critical in domains outside of healthcare, including self-driving vehicles and autonomous weapons systems. “If we want to see the full potential of AI in all these application domains, these issues have to be solved,” says Werkhoven. “AI applications should mirror the values of society.”
The first question is how human moral values can be expressed in ways that machines can interpret. That is, how can we build mathematical models that reflect the morals of society? The second big question is at least as important – and perhaps even more elusive: what exactly are our values as a society? People do not agree on the answer.
“In the medical world there is more agreement than in other domains,” says Werkhoven. “They have elaborated moral and ethical frameworks. During the pandemic, we came close to applying those values when maximum care capacity was reached. These moral frameworks must represent the moral values of society with respect to a given situation.”
Read more about TNO
- Dutch research institute TNO, in collaboration with various partners, has developed self-healing security software.
- Netherlands leads the way in the development of extended reality technology, which could prove its value in a crisis such as Covid-19.
- Dutch research organisation is looking into areas where self-sovereign identity technology could be used in society and business.
Beyond morality is explainability. AI systems go beyond traditional rules-based programming, where coders use programming constructs to build decisions into programs. Anybody who wants to know why a traditional application made a certain decision can look at the source code and find out. While it might be complicated, the answer is in the program.
By contrast, the neural network type of AI learns from large data sets, which must be carefully curated so that the algorithm learns the right things. The networks generated during the learning phase are then deployed in the field, where they make decisions based on the patterns they have learned. It’s virtually impossible to look at a neural network and figure out how it made a given decision.
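A toy example makes the contrast concrete. In the rules-based function below, the reason for any decision can be read straight from the source; in the miniature “learned” version, the decision comes out of numeric weights that a training phase would produce, and nothing in them reads as a rationale. The lending scenario and all the numbers are invented.

```python
# Rules-based logic: the decision and its reason are visible in the source.
def rule_based_approval(income, debt):
    if debt / income > 0.4:   # explicit, auditable rule
        return "DENY"
    return "APPROVE"

# Toy "trained" model: two normalised inputs -> one verdict. The weights
# below stand in for the output of a learning phase; nothing about them
# reads as a reason a human could point to.
W = [0.73, -1.41]
B = 0.28

def learned_approval(income_norm, debt_norm):
    activation = W[0] * income_norm + W[1] * debt_norm + B
    return "APPROVE" if activation > 0 else "DENY"

print(rule_based_approval(50_000, 30_000))  # DENY, and the code says why
print(learned_approval(0.5, 0.3))           # a verdict, but no stated reason
```

Real neural networks have millions or billions of such weights, which is why inspecting them directly tells an auditor nothing.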
Explainable AI aims to close this gap, providing ways for algorithms to explain their decision-making. One big challenge is to develop a way of communicating the explanation so that humans can understand it. AI might come up with reasons that are logically correct but too complicated for humans to understand.
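One simple member of this family of techniques is sensitivity analysis: nudge each input in turn and report which one moves the model’s output the most. The sketch below applies it to an invented stand-in for an opaque model; production explainability methods (feature attribution, surrogate models and so on) are far more sophisticated, but the principle is similar.

```python
# Minimal sensitivity-based explanation for an opaque model.
# black_box() is an invented stand-in for any trained model.

def black_box(features):
    return (0.9 * features["blood_pressure"]
            + 0.3 * features["age"]
            - 0.5 * features["exercise"])

def explain(model, features, delta=0.01):
    base = model(features)
    impacts = {}
    for name in features:
        nudged = dict(features, **{name: features[name] + delta})
        impacts[name] = abs(model(nudged) - base)  # output change per input
    top = max(impacts, key=impacts.get)
    return f"For this input, the output is most sensitive to '{top}'."

patient = {"blood_pressure": 0.8, "age": 0.6, "exercise": 0.2}
print(explain(black_box, patient))  # -> most sensitive to 'blood_pressure'
```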
“We now have AIs like ChatGPT that can explain things to us and give us advice,” says Werkhoven. “If we no longer understand their explanations but still take the advice, we may be entering a new stage of human evolution. We may start designing our environments without having the slightest idea why and how.”