Singapore releases Asia’s first AI governance framework
The model framework will help businesses in Singapore tackle the ethical and governance challenges of AI implementations
The Singapore government has released an artificial intelligence (AI) governance framework to help businesses tackle the ethical and governance challenges arising from the growing use of AI across industries.
The model framework, announced at the World Economic Forum (WEF) in Davos, Switzerland, builds on earlier guidelines detailed in a discussion paper released in June 2018 by Singapore’s Personal Data Protection Commission (PDPC) and Infocomm Media Development Authority (IMDA).
Underpinning the framework are two high-level guiding principles – AI implementations should be human-centric, and decisions made or assisted by AI should be explainable, transparent and fair to consumers.
These principles, IMDA said, will enhance trust in and understanding of AI, as well as acceptance of how AI-related decisions are made for the benefit of users.
As an example, a retail store that plans to use AI to recommend food products to consumers can use the framework to gauge the implications of a wrong or harmful recommendation, and decide if and how people should be involved in the decision-making process.
Speaking at the WEF, Singapore’s minister for communications and information, S. Iswaran, said Singapore has always had “a certain mindshare” in terms of its ability to contribute to cutting-edge policy development and governance, and to think ahead in collaboration with the industry.
“By announcing the model framework in Davos, we have the opportunity to both underscore Singapore’s continued role in that context, as well as invite global feedback on what we are doing,” Iswaran said.
Asked why countries like the US and Japan might be interested in adopting Singapore’s framework, Iswaran noted that Singapore is a small, open and pro-business economy that supports a rules-based trading and economic environment.
“Therefore, when we propose some of these ideas, they tend to be seen in that context,” Iswaran said, adding that Singapore is also more objective compared with some other jurisdictions.
Noting that the framework was developed in consultation with the industry, Iswaran said private companies should find that most, if not all, of it falls well within the bounds of what they would have considered important focus areas.
As such, Iswaran said it is unlikely that industry players will be deterred from doing things out of Singapore because of the framework.
“On the contrary, I would say that we might even be able to come out as one of the jurisdictions with sound approaches to data management and the governance of AI, and other frontier technologies. In this case, more companies would want to be associated with us,” he added.
Moving forward, IMDA and the WEF will engage organisations to discuss the model framework in greater detail and facilitate its adoption.
Work has already commenced, with closed-door discussions led by the WEF’s Centre for the Fourth Industrial Revolution (C4IR) and IMDA to seek feedback on the framework, as well as to generate and understand use cases and practical examples.
The C4IR and IMDA will also develop a measurement matrix for the framework, which regulators and certification bodies around the world can adopt and adapt to assess whether organisations are deploying AI responsibly.
Read more about AI ethics and governance
- The technologists building artificial intelligence algorithms should take responsibility for the technology’s impact on society, a survey finds.
- The Singapore government has set up an AI advisory council to ensure the ethical use of AI and data in the city-state.
- The announcement of high-profile members of the UK’s Centre for Data Ethics and Innovation is another step in the government’s commitment to be a global leader in AI.
- Explainability has been touted as a solution to the problem of biased AI models, but experts say that approach only gets you part of the way to bias-free applications.