
Netherlands wants watchdog to reduce bias in artificial intelligence

Dutch government will take swift action to prevent citizens getting into trouble due to the misuse of algorithms


The Dutch child benefits scandal showed the devastation that algorithms can cause, which is why the Netherlands government wants to ensure that citizens can keep control of their digital lives and trust the digital world.  

Alexandra van Huffelen, state secretary for digitisation, has written to the House of Representatives that it is essential to have algorithms on the market that respect human rights. According to Van Huffelen, this applies not only to the algorithms of companies, but also to the applications of government, which, in her view, has an exemplary role.

Pascal Wiggers, an associate professor in responsible AI at the Amsterdam University of Applied Sciences, studies how to design ethically and socially responsible AI. He approaches this by examining existing algorithms, developing responsible AI solutions and giving developers the tools to build such algorithms.

“The benefits affair has made it painfully clear that AI can have problems,” said Wiggers. “Those problems were always there, but they were invisible because AI is a black box. Increasingly, we see concrete examples of bias in algorithms, such as Amazon’s recruitment tool, which rejected women’s application letters by default because the system had been trained mostly on data about male candidates.”

But it is no surprise that current AI systems contain biases, said Wiggers. “Such a system learns from the data it is fed. And that data carries all the biases we hold as a society, so the system becomes a reflection of who we are. That makes it difficult to get it right. Also, because AI does not consist of hand-written code but is self-learning, it is difficult to figure out how and why a system learns what it learns.”
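How training data transmits bias can be shown in a few lines. The sketch below is purely illustrative – synthetic data and a scikit-learn model, nothing to do with Amazon’s actual system – but it demonstrates the mechanism Wiggers describes: a model fitted to historically skewed hiring decisions reproduces that skew at prediction time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One protected attribute (0/1) and one genuinely job-relevant skill
# score, drawn identically for both groups
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical labels are biased: at equal skill, group 1 was hired less often
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# The model reproduces the bias: identical skill, different predicted
# hiring probability depending on group alone
probe = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(probe)[:, 1])
```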

To achieve responsible deployment of algorithms, European policies and the legislation now in the making play an essential role. But the Dutch government is going one step further by setting up an algorithm register, which will be worked out in close cooperation with governments and experts in the coming years.

Van Huffelen wants to appoint a regulator, and wrote to the House of Representatives: “Algorithms should be tested beforehand and repeatedly examined during use for data quality, proportionality, and possible bias.” It is still being determined whether this will be a separate regulator or whether responsibility for monitoring algorithms will fall to existing watchdogs.  
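What “repeatedly examined during use” might look like in practice is still open, but a common starting point is comparing positive-decision rates across demographic groups. A minimal sketch – the logged decisions, group labels and threshold below are all invented for illustration:

```python
import numpy as np

def selection_rates(decisions, groups):
    """Share of positive decisions per group - a basic disparity check."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

# Hypothetical decisions logged from a system in use
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = selection_rates(decisions, groups)
print(rates)  # {'a': 0.75, 'b': 0.25}

# Flag the system for human review if the gap exceeds a chosen threshold
if max(rates.values()) - min(rates.values()) > 0.2:
    print("selection-rate disparity above threshold - review required")
```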

Wiggers thinks the latter is more feasible. “Algorithms are not an isolated thing – they are everywhere,” he said. “So it seems logical that the Telecom Agency will do the monitoring when it comes to, say, AI in a telecom application. And in the case of financial algorithms, that responsibility will lie with the Dutch Authority for the Financial Markets.”

As for the algorithm register, the Netherlands government can already learn from the municipality of Amsterdam, which established the first such register two years ago. The algorithm register is an overview of the algorithms the city of Amsterdam uses in municipal services and also shows how they are used. For example, the city uses algorithms for handling reports of litter in public spaces, for parking controls, and for detecting illegal holiday rentals.
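In code terms, a register entry is essentially structured metadata about a deployed algorithm. The fields below are hypothetical – chosen to mirror the kind of information Amsterdam publishes, not its actual schema:

```python
# Illustrative only: hypothetical field names, not the actual schema
# of Amsterdam's algorithm register
register_entry = {
    "name": "Holiday rental fraud detection",
    "purpose": "Prioritise reports of suspected illegal holiday rentals",
    "department": "Surveillance and Enforcement",
    "data_sources": ["citizen reports", "housing registry"],
    "human_oversight": "An enforcement officer reviews every flagged case",
    "citizen_recourse": "How to object or request more information",
}
```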

“That is a form of transparency,” said Wiggers. “But the challenge of such a register turns out to be that it is tough to write it down properly so that everyone can read it and do something with that information. Talking about AI at an expert level is different from talking about it to a wide audience. Also, as a citizen, you want an action perspective – to know what to do when you see something you disagree with.”

The simple fact that the register exists is already a step in the right direction, said Wiggers, but there is also a barrier: a user has to actively go to the register to check an algorithm. “Ideally, you would like to move towards some kind of logo on a page where an algorithm is used, which you can click on for more information,” he said. “But that is still difficult to achieve, especially because many algorithms are used behind the scenes. Such a register is a tool that certainly helps, but always as part of a palette of other measures.”

According to Van Huffelen, algorithms should only be used when it is necessary to make government work properly. She wrote that new European AI legislation should ensure that algorithms and their applications are fair and transparent, so that citizens and businesses can be sure of proper treatment.  

But this is not easy to achieve, said Wiggers. “It all sounds very nice, but AI is difficult to measure,” he added. “Certain things, like data management and cyber security, are fairly straightforward, making them relatively easy to monitor. But European law also requires ‘human oversight’ and transparency. The question then is, what exactly does this mean, how far does it go, and how do you shape it? Van Huffelen also wants algorithms to be constantly monitored, and they should be, as the behaviour of self-learning, data-driven algorithms changes over time. But how?”
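One hedged answer to that “but how?” is statistical drift monitoring: comparing the data a system sees in production against the data it was trained on, and raising an alert when the two diverge. The sketch below uses a population stability index on synthetic numbers – the feature, the data and the 0.2 rule of thumb are illustrative, not a prescribed standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between training-time data and live data for one feature.
    Values above roughly 0.2 are often read as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
training = rng.normal(0, 1, 5_000)
live = rng.normal(0.5, 1.2, 5_000)  # the input population has shifted

print(population_stability_index(training, live))  # well above 0.2: investigate
```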

That is where Wiggers’ research comes in. “We are trying to find ways to apply concepts like transparency and develop tools to monitor algorithms,” he said. “We are collaborating with the University of Amsterdam and the CWI in an ELSA Lab on AI, media and democracy, among others. For instance, we are investigating whether we can find a potential bias in automatic speech recognition. In other words, does the AI system understand all dialects equally well?

“But we are also focusing on the interface side of AI. Especially when it comes to human oversight, we want to identify what information someone needs to see to draw conclusions. Of course, AI is often seen as an advisory system where a human makes the final decision, but how do you make sure that when the system has given the right advice 10 times, the human decision-maker still watches critically the 11th time?”
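The dialect check Wiggers describes can be made concrete by computing word error rates per dialect on a labelled test set and comparing the averages. A minimal sketch – the transcripts and dialect labels are invented for illustration:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate via Levenshtein distance over word tokens."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / max(len(r), 1)

# Hypothetical test set: (dialect label, reference transcript, ASR output)
samples = [
    ("standard", "the weather is nice today", "the weather is nice today"),
    ("dialect_a", "the weather is nice today", "the weather his nice to day"),
]

by_dialect = {}
for dialect, ref, hyp in samples:
    by_dialect.setdefault(dialect, []).append(word_error_rate(ref, hyp))

for dialect, rates in by_dialect.items():
    print(dialect, sum(rates) / len(rates))  # a persistent gap suggests bias
```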

So not only is there a need for tools to measure AI’s performance, but it is also essential for developers to understand how algorithms work and learn. Wiggers added: “A lot is still unclear, but I notice that this topic – responsible AI – is very much alive in the Netherlands. This is not surprising because, as an organisation, you don’t want to be in the news in a negative way because your company’s algorithm turns out to discriminate.

“But because the tooling to monitor AI systems is still lacking, it is difficult to report exactly what a system is doing and how it works. This is something that we, as scientists, industry and government, need to learn together. How do we make our technology accountable?”

Wiggers stressed that the threat of bad publicity is one more reason to build responsible AI systems, and that doing so ultimately makes for better technology. He draws a comparison with the General Data Protection Regulation (GDPR).

“There too, in the beginning it was not entirely clear how it would work and what was required, but slowly it was fleshed out, and now it is clear to almost everyone what you must comply with,” he said. “The same applies to AI – perhaps to an even greater extent. There is a responsibility for developers to think more carefully about the algorithms they build and how they are trained.

“It shouldn’t become a checklist, but people should actually want to build better systems because, at the end of the day, everyone will benefit.”
