IBM opens wider aperture on Watson OpenScale AI bias 

By goodness and by golly, hasn’t AI already thrown up its own set of buzzphrases?

Many of them have arisen from the bias that was an inherent part of early systems development in this field.

Because these systems were often developed by men, gender bias went overlooked… and then, looking wider, we quickly found that sensitivity to culture, race, religion, ethnicity and many other defining human traits had never been conscientiously ‘programmed in’ to these systems in their foundational architecture.

Today, we’re not just worried about getting our hands on working AI… we need ‘Explainable AI’ and ‘Operationalised AI’… and perhaps just plain old ‘Governed AI’ and ‘Trusted AI’.

This is part of why IBM created Watson OpenScale.

Susannah Shattuck, offering manager for Watson OpenScale, has explained that the product was introduced to give business users and non-data scientists the ability to monitor their AI and machine learning models: to understand performance, to help detect and mitigate algorithmic bias and to get explanations for AI outputs.

But, she says, that was just the start. 

Protected attributes

IBM Watson OpenScale has now been augmented to make it easier to detect and mitigate bias against ‘protected attributes’ such as sex and ethnicity, through recommended bias monitors.

“Up till now, users manually selected which features or attributes of a model to monitor for bias in production, based on their own knowledge. With the new recommended bias monitors, Watson OpenScale will now automatically identify whether known protected attributes, including sex, ethnicity, marital status, and age, are present in a model and recommend they be monitored. Such functionality will help users avoid missing these attributes and ensure that bias against them is tracked in production,” said Shattuck. 
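To make that concrete, here’s a minimal sketch in Python of how such automatic detection might work. It is an illustration only: OpenScale’s internal detection logic isn’t public, so the attribute list, function name and feature names below are all assumptions for the example.

    # Illustrative sketch: the attribute list and matching logic are
    # assumptions, not Watson OpenScale's actual implementation.
    KNOWN_PROTECTED_ATTRIBUTES = {"sex", "gender", "ethnicity", "marital status", "age"}

    def recommend_bias_monitors(model_features):
        """Return the model features that match known protected attributes
        and so should be recommended for bias monitoring in production."""
        recommended = []
        for feature in model_features:
            # Normalise the feature name for a case-insensitive match
            normalised = feature.strip().lower().replace("_", " ")
            if normalised in KNOWN_PROTECTED_ATTRIBUTES:
                recommended.append(feature)
        return recommended

    # Example: input features of a hypothetical credit-risk model
    features = ["Income", "Sex", "Marital_Status", "LoanAmount", "Age"]
    print(recommend_bias_monitors(features))  # ['Sex', 'Marital_Status', 'Age']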

She also notes that the team is working with regulatory compliance experts to continue expanding this list of attributes to cover the sensitive demographic attributes most commonly referenced in data regulation.

Shattuck says that in addition to detecting protected attributes, Watson OpenScale will recommend which values within each attribute should be set as the monitored values and which as the reference value.

For example, within the “Sex” attribute, it might recommend that the bias monitor be configured such that “Woman” and “Non-Binary” are the monitored values and “Male” is the reference value.
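As a rough illustration of what a monitor configured this way might compute, the Python sketch below works out a disparate impact ratio (the favourable-outcome rate of the monitored group divided by that of the reference group), one common fairness metric. The record format and outcome labels are assumptions for the example; this is not OpenScale’s actual calculation.

    # Hypothetical example data and labels; not OpenScale's real metric code.
    def disparate_impact(records, attribute, monitored_values, reference_value,
                         favourable_outcome="approved"):
        """Ratio of favourable-outcome rates: monitored group vs reference group.
        A ratio well below 1.0 suggests the monitored group is disadvantaged."""
        def favourable_rate(group):
            if not group:
                return 0.0
            return sum(r["outcome"] == favourable_outcome for r in group) / len(group)

        monitored = [r for r in records if r[attribute] in monitored_values]
        reference = [r for r in records if r[attribute] == reference_value]
        ref_rate = favourable_rate(reference)
        return favourable_rate(monitored) / ref_rate if ref_rate else float("inf")

    records = [
        {"Sex": "Male", "outcome": "approved"},
        {"Sex": "Male", "outcome": "approved"},
        {"Sex": "Woman", "outcome": "approved"},
        {"Sex": "Woman", "outcome": "denied"},
        {"Sex": "Non-Binary", "outcome": "denied"},
    ]
    print(disparate_impact(records, "Sex", {"Woman", "Non-Binary"}, "Male"))  # 0.33...

A ratio this far below the commonly cited four-fifths (0.8) rule of thumb would typically flag the model for closer review.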

“Recommended bias monitors help to speed up configuration and ensure that you are checking your AI models for fairness against sensitive attributes. As regulators begin to turn a sharper eye on algorithmic bias, it is becoming more critical that organisations have a clear understanding of how their models are performing, and whether they are producing unfair outcomes for certain groups,” concluded Shattuck.

We’re on the road to bias-free, open-attribute AI, but there’s a sea of algorithmic bias out there to combat… so it’s still early days.