
IT chiefs recognise the risks of artificial intelligence bias

Artificial intelligence promises to change the way businesses operate. IT leaders are now taking bias in AI algorithms seriously

A survey of 350 US and UK-based CIOs, chief technology officers (CTOs), vice-presidents and IT managers has reported that IT decision-makers are becoming increasingly aware of artificial intelligence (AI) bias.

More than two-fifths (42%) of AI professionals across the US and UK say they are “very” to “extremely” concerned about AI bias, according to research from DataRobot.

DataRobot’s The State of AI bias 2019 study found that most organisations (71%) currently rely on AI to execute up to 19 business functions.

“More organisations are deploying AI as they recognise the technology as a critical success factor for competing in today’s business climate,” said Ted Kwartler, vice-president of trusted AI at DataRobot.

Almost a fifth of the IT decision-makers surveyed said that AI is used to complete 20 to 49 business functions, and 10% said they use AI to complete more than 50 business functions.

DataRobot’s research found that AI is used by organisations to execute functions across departments, including operations (76%), finance and accounting (54%), marketing (49%), sales (47%), and human resources (35%).

“We’ve observed that AI maturity varies widely, with many organisations still using untrustworthy AI systems,” Kwartler added.

The biggest AI bias concerns for the IT executives surveyed include “compromised brand reputation” and “loss of customer trust”.

Colin Priest, vice-president of AI strategy at DataRobot, said: “While many organisations have started to take the right steps to mitigate AI bias – such as moving away from black box systems and establishing internal AI guidelines – there’s more to be done to win the trust of businesses and consumers.

“Every business must make AI bias education a priority so they can implement critical strategies in their AI systems that will help prevent it from happening.”

Respondents said they faced challenges in developing unbiased AI algorithms, determining what data to train AI models on, and understanding how input data relates to AI decisions.

The survey found that 65% of IT decision-makers are using tools to explore why AI makes negative decisions. Almost two-thirds said they are also using tools to check which input data has the greatest effect on an AI decision. Just over half said they use word clouds to look at how text input is associated with AI decisions.

The survey reported that 94% of US IT leaders and 86% of UK IT leaders plan to invest more in AI bias prevention initiatives in the next 12 months. 

To enhance AI bias prevention efforts moving forward, 59% of the IT decision-makers surveyed said they plan to invest in more sophisticated white box systems, where AI decisions are explainable. More than half (54%) said they will hire internal personnel to manage AI trust, and 48% said they intend to enlist third-party firms to oversee AI trust.

The survey also reported that 85% of IT leaders believe AI regulation would be helpful for better defining what constitutes AI bias and how it should be prevented.

Read more about AI bias

  • When implementing AI, it’s important to focus on the quality of training data and model transparency in order to avoid potentially damaging bias in models.
  • As AI penetrates all kinds of software, it’s imperative that testers devise a plan to verify predictive results. Put QA professionals on task to root out AI bias.
