Fundamental guardrails to address AI risk and value

Leveraging existing governance approaches and current successes will make AI governance less daunting while supporting business goals

The rapid adoption of generative AI has sparked much debate about AI’s potential effects on society. Organisations must balance the technology’s significant potential against the new risks it poses, particularly misuse.

Organisations need appropriate guardrails to leverage the technology’s potential while addressing its challenges. This can be achieved with a flexible governance framework tailored to AI’s unique features, one that ensures the technology is used safely and responsibly.

But what exactly is AI governance and why should organisations take it seriously?

Although it may seem a paradox, good governance enables better innovation. It provides the constraints and guardrails that let organisations explore questions about AI’s value and risks, as well as the space within which to innovate and start to produce results.

Scaling AI without governance is ineffective and dangerous. Society expects organisations to operate with transparency, accountability and ethics. AI governance is therefore necessary to meet these societal demands, while supporting progress by dealing with complexity, ambiguity and rapid technological evolution.

As well as considering broader societal impacts and regulatory compliance, organisations need to balance the competing requirements of trust and control in the workplace with business value, corporate risk and the privacy of individual employees, customers and citizens.

For example, AI governance must provide bias mitigation guidelines and validation requirements, with consideration of cultural differences and regulations protecting the rights of individuals and groups. Bias can negatively affect the level of acceptance of AI in organisations and in society as a whole.

For multinational organisations, bias is a fiendish problem because cultural norms and associated regulations, such as consumer protection laws, vary from one country to another. 
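
To make such validation requirements concrete, the short Python sketch below computes a “four-fifths” disparate impact ratio across two groups. The function, group labels, sample data and the 0.8 threshold are illustrative assumptions rather than a universal standard; the appropriate test and threshold depend on the jurisdiction and the use case.

from collections import defaultdict

# Hypothetical validation check: the "four-fifths" disparate impact ratio.
# Group labels, data and the 0.8 threshold are illustrative assumptions.
def disparate_impact_ratio(outcomes):
    """outcomes: iterable of (group, favourable_decision) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favourable[group] += int(ok)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
ratio, rates = disparate_impact_ratio(decisions)
print(f"favourable-decision rates: {rates}, ratio: {ratio:.2f}")
if ratio < 0.8:  # rule-of-thumb threshold; regulations vary by country
    print("potential disparate impact - flag for human review")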

Organisations should identify people to manage organisational, societal, customer-facing and employee-facing considerations. These people should represent diverse ways of thinking, backgrounds and roles. Governance decisions and decision rights can then be differentiated by drawing on their expertise and perspectives.

Decision rights establish authority and accountability for business, technology and ethical decisions. They should be concentrated on the most critical AI content, which should be governed aggressively.

On the other hand, organisations can allow greater autonomy in decision rights for non-critical AI content, but employees who use AI assistance must be aware that they are accountable for the resulting outcomes.
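
As an illustration of how decision rights might be codified by criticality, the Python sketch below maps tiers of AI content to approvers and review requirements. The tier names, roles and examples are assumptions chosen for illustration, not a standard taxonomy.

# Hypothetical mapping of AI content criticality to decision rights.
# Tier names, approvers and examples are assumptions for illustration.
DECISION_RIGHTS = {
    "critical": {
        "approver": "ai_council",
        "review": "mandatory",
        "examples": ["customer-facing models", "regulated decisions"],
    },
    "non_critical": {
        "approver": "individual_employee",  # the employee remains accountable
        "review": "self-service",
        "examples": ["drafting assistance", "internal summaries"],
    },
}

def required_approver(tier: str) -> str:
    """Return who holds decision rights for AI content in the given tier."""
    return DECISION_RIGHTS[tier]["approver"]

print(required_approver("critical"))      # ai_council
print(required_approver("non_critical"))  # individual_employee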

Addressing AI complexity with governance

AI encompasses a continually evolving technological landscape and this complexity – together with the ambiguity intrinsic to AI’s nature – leads to a lack of clarity around its reputational, business and societal impacts.

Governance should reflect AI’s cross-functional and predictive characteristics. A mistake many organisations make is treating AI governance as a standalone initiative. It should instead be an extension of the measures already in place in the organisation.

Taking advantage of existing governance approaches and building on current successes makes the task of managing AI impacts less daunting and more easily understood. Many existing practices apply to AI, including data classification, standards and communication, but AI also has unique characteristics – trust, transparency and diversity – and how these apply to people, data and techniques is important.
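
As one example of such reuse, an existing data classification scheme can be extended to gate AI use. The Python sketch below is a hedged illustration; the classification levels and rules shown are assumptions, and a real policy would derive from the organisation’s own classification standard.

# Hypothetical reuse of an existing data classification scheme to gate AI
# use. Classification levels and rules are assumptions for illustration.
ALLOWED_AI_USE = {
    "public":       {"external_llm": True,  "model_training": True},
    "internal":     {"external_llm": False, "model_training": True},
    "confidential": {"external_llm": False, "model_training": False},
}

def may_send_to_external_llm(classification: str) -> bool:
    """Check whether data at this level may be sent to an external AI service."""
    return ALLOWED_AI_USE.get(classification, {}).get("external_llm", False)

print(may_send_to_external_llm("internal"))  # False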

Critical AI-related decisions in many organisations are made by an AI council. This is generally chaired by the CIO or chief data and analytics officer (CDAO), and involves a working group with representatives from across the business. This diverse group of stakeholders needs direct working relationships with other governance groups to spearhead AI governance efforts.

One of the council’s first efforts is ensuring compliance with relevant regulations. While privacy is the most visible concern, a range of legal and industry-specific requirements will also have to be met.

AI governance starts with the outcomes that support business goals. The goal of an AI pilot or proof of concept should be to prove value as defined and confirmed by the council jointly with other business stakeholders, not to hit technical measurements such as accuracy or to compare tools.

For AI-advanced organisations, this includes governing the entire AI lifecycle, with the goal of enabling reuse of AI components, accelerating delivery and scaling AI across the enterprise.

Svetlana Sicular is vice-president analyst at Gartner
