How AI ethics is coming to the fore with generative AI

The hype around ChatGPT and other large language models is driving more interest in AI and bringing ethical considerations around their use to the fore

The hype around generative artificial intelligence (AI) models such as ChatGPT and other large language models has brought ethical considerations surrounding their use to the fore, such as copyright issues, harm from misinformation and potential biases in AI models.

While some of those considerations are already being addressed by big tech firms and industry regulators through ethical guidelines, the fact that these large language models are spurring more interest in AI demands that organisations take AI ethics more seriously.

According to a global poll of 2,500 executives by Gartner, 45% of respondents noted that the publicity around ChatGPT has prompted them to increase AI investments. Some 70% said their organisation is investigating and exploring the use of generative AI, while 19% are already in pilot or production mode.

But what’s more worrying is that, of those polled, just 5% are very concerned about the risks of generative AI, while those that have started pilots are unlikely to be fully aware of what to expect, says Svetlana Sicular, vice-president analyst at Gartner.

“The current GPT hype has introduced the idea of AI, but not necessarily generative AI, to a new audience who are not familiar with AI and AI ethics,” Sicular says. “However, we have also seen in some of our earliest surveys that 40% of organisations are looking at responsible AI, which is wider than AI ethics.”

The discussion of AI ethics often starts with a set of principles guiding the moral use of AI, which is then applied in responsible AI practices. The most common ethical principles include being human-centric and socially beneficial, being fair, offering explainability and transparency, being secure and safe, and showing accountability.

Such principles will help organisations resolve ethical questions, such as whether to use an AI model that could predict the onset of breast cancer five years in advance, potentially subjecting people to unnecessary procedures even if they never go on to develop the disease.

“But it’s still about saving lives and while the model may not detect everything, especially the early stages of breast cancer, it’s a very important question,” Sicular says. “And because of its predictive nature, you will not have everyone answering the question in the same fashion. That makes it challenging because there’s no right or wrong answer.”

Tech giants such as Salesforce have taken the challenge seriously. The supplier of cloud-based software used by sales, marketing and customer service teams incorporates AI into its tools, and has developed an AI ethics model that incorporates common ethical principles.

But as it started to work with generative AI, which increased the risks for customers, it found that it had to be more specific with its principles and created a set of guidelines for enterprise use cases, according to Kathy Baxter, principal architect of ethical AI practice at Salesforce.

One of the guidelines, says Baxter, is the accuracy of AI models. “This isn’t a consumer use case where you’re asking ChatGPT to write a birthday poem where if it gets it wrong, there’s no real harm or risk there.

“But if you are using generative AI to recommend steps for a field service agent to troubleshoot or fix a piece of hardware or machinery and if it leaves out a step, someone could genuinely get hurt,” she says.

In such cases, accuracy is critical, and the user has to know, with certainty, where a generative AI model’s answer comes from. “We have to be able to say that this is the source of truth and that you don’t try to search for what you learned from the Web years ago and look for the answer there,” Baxter says.
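
One way this idea is often put into practice is to ground a model’s answers in a curated knowledge base and reject any answer that cannot cite it. The sketch below is a generic, hypothetical illustration of that pattern, not Salesforce’s implementation; the knowledge base entries, the [doc-N] citation convention and the helper names are all assumptions.

```python
import re

# Hypothetical knowledge base standing in for a curated "source of truth".
KNOWLEDGE_BASE = {
    "doc-1": "Power down the unit and disconnect the mains supply before opening the panel.",
    "doc-2": "Replace the fuse only with a part rated for 5 A / 250 V.",
}

def build_grounded_prompt(question: str) -> str:
    """Embed the trusted passages in the prompt and ask the model to cite them."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in KNOWLEDGE_BASE.items())
    return (
        "Answer ONLY from the passages below and cite every claim as [doc-N].\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

def is_grounded(answer: str) -> bool:
    """Accept an answer only if it cites at least one known passage and nothing unknown."""
    citations = re.findall(r"\[(doc-\d+)\]", answer)
    return bool(citations) and all(c in KNOWLEDGE_BASE for c in citations)

# In practice the draft would come from an LLM call; it is hard-coded for the sketch.
draft = "Disconnect the mains supply before opening the panel [doc-1]."
print("grounded answer" if is_grounded(draft) else "escalate to a human agent")
```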

Another guideline is related to safety, where there is a need to guard against toxicity and other harmful content, as well as to protect personally identifiable information. “We’ve seen that generative AI models can leak training data, so private photos or other very sensitive content should never be generated by the model.”
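
Guardrails of this kind are typically applied as filters on both the prompt going in and the text coming out. The following is a minimal sketch of that idea under stated assumptions, not Salesforce’s actual guardrail: the regex patterns, placeholder block list and function names are illustrative only.

```python
import re

# Illustrative patterns only; a production guardrail would rely on a vetted PII detector.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}
BLOCKED_TERMS = {"example_toxic_term"}  # placeholder block list, not a real lexicon

def redact_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label} removed>", text)
    return text

def is_safe(text: str) -> bool:
    """Reject text containing blocked terms before it reaches, or leaves, a model."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

prompt = "Email the customer at jane.doe@example.com about the outage."
clean_prompt = redact_pii(prompt)
print(clean_prompt if is_safe(clean_prompt) else "blocked before reaching the model")
```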

Salesforce is also transparent about when its AI capabilities are delivered autonomously. It makes it clear whenever a user is engaging with a chatbot and not a live human agent, Baxter says, adding that while “generative AI will get better in doing things on its own, for now, this is really about supercharging human abilities”.

One common ethical AI principle is human centricity, which puts people in charge even when AI systems make automated decisions. At the same time, however, the whole point of AI is that it scales well.

Still, having people in charge does not mean every single decision has to be validated by a human being, according to a Gartner report on AI ethics. Instead, there must be an “override possibility for decisions, ensuring human decision autonomy”.

On how AI models can scale with human involvement, Sicular notes that the models can still scale at some point, but not from the get-go. “We recommend using generative capabilities as a first draft and the human has to be always in the loop at this point,” she says. “What you’re scaling will probably be for less critical use cases, such as running personalised ads, even if the ads are a little off.”

Lee Joon Seong, applied intelligence lead in Southeast Asia at Accenture, points out the nuances of human involvement in AI decision-making. He notes that humans can be “out of the loop” for use cases with low impact, but they need to be “in the loop” or “on the loop” for tasks such as underwriting, where the tolerance for errors is low.
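
One way to encode that distinction is a simple routing policy that maps each AI-assisted decision to a level of human involvement based on its impact. The sketch below is an illustrative assumption of how such a policy could look, not something prescribed by Accenture or Gartner; the tiers and example use cases are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    LOW = "low"        # e.g. personalised ad copy
    MEDIUM = "medium"  # e.g. drafts of customer-service replies
    HIGH = "high"      # e.g. underwriting, field-service repair steps

@dataclass
class Decision:
    description: str
    impact: Impact

def route(decision: Decision) -> str:
    """Map a decision's impact tier to the level of human involvement it gets."""
    if decision.impact is Impact.LOW:
        return "auto-apply (human out of the loop)"
    if decision.impact is Impact.MEDIUM:
        return "apply, but sample and review afterwards (human on the loop)"
    return "hold as a draft until a human approves it (human in the loop)"

for d in [
    Decision("personalised ad variant", Impact.LOW),
    Decision("troubleshooting steps for a field engineer", Impact.HIGH),
]:
    print(f"{d.description}: {route(d)}")
```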

For AI models that make automated decisions, their creators have sought to provide more transparency on how the decisions are derived. This is increasingly difficult to do with large language models, which have proved to be capable of hallucinating incorrect facts in some cases.

“With generative AI, you will never be able to explain 10 trillion parameters, even if you have a perfectly transparent model,” Sicular says. “It’s a matter of AI governance and policy to decide what should be explainable or interpretable in critical paths. It’s not about generative AI per se; it's always been a question for the AI world and a long-standing problem.”

Lee says that while industry experts are exploring ways to improve the explainability of large language models, there are other approaches to providing transparency, such as looking at the inputs and outputs of a model.

He adds: “The material risk assessment is also an important element of this because if the impact is immaterial, then you may be more relaxed about not understanding how a decision is made, if it has lower financial or reputational impact, for example.”
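
A lightweight way to get that input/output visibility is to log every prompt, response and assessed impact to an append-only audit trail that can be reviewed later. This is a generic sketch rather than Accenture’s approach; the JSON-lines file, field names and example values are assumptions.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("model_audit.jsonl")  # assumed append-only audit trail

def log_interaction(prompt: str, response: str, impact: str, model: str) -> None:
    """Record what went into and came out of the model, plus its assessed impact."""
    record = {
        "timestamp": time.time(),
        "model": model,
        "impact": impact,      # e.g. "low", "medium" or "high" from a risk assessment
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_interaction(
    prompt="Summarise the claim history for policy 123.",
    response="Three claims in five years; no fraud flags raised.",
    impact="high",
    model="example-llm",
)
```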

One of the challenges of AI is that biases can be propagated at scale. These biases can be introduced by the creators of AI models, by either hardcoding personal preferences into algorithms or using incomplete datasets for model training, says John Yang, vice-president for Asia-Pacific and Japan at Progress.

A recent study by Progress found that nearly two-thirds of respondents in Australia are concerned about being exposed to data bias when using analytics or AI tools, whether from biased datasets; model, algorithm or training bias; or the unconscious bias of people.

Despite those concerns, Yang says there’s still a lack of understanding of how biases can be introduced, calling for organisations to take a people, process and technology approach to address the issue.

“The people element is very key as many AI and computer scientists are male,” Yang says, noting that data bias is not a concern for most of them. “You can almost believe that they all come from similar backgrounds and have their own personal preferences embedded in their algorithms.”

“So, if an organisation wants to be serious about addressing data bias, it needs to bring a more diverse group of people together, train them on dataset selections and management, and set up a process to do regular reviews to make sure the issue is top of mind.”

To mitigate data biases, organisations can use data management tools to track the provenance of datasets and manage the lifecycle of data, which could come from social media, geographic information systems or transactions, Yang says. “You can manage and harmonise those datasets, so that you can view them from various angles, including whether they are biased or unbiased.”
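
In code, tracking provenance and checking for skew can start with nothing more than a source tag on every record and a report of how the data breaks down by group. The sketch below is illustrative only, not a feature of any particular data management product; the record fields and the 30% under-representation floor are assumptions.

```python
from collections import Counter

# Each record carries provenance (where it came from) alongside its attributes.
training_records = [
    {"source": "social_media", "region": "APAC"},
    {"source": "transactions", "region": "EMEA"},
    {"source": "transactions", "region": "EMEA"},
    {"source": "gis", "region": "Americas"},
]

def provenance_report(records: list[dict]) -> Counter:
    """Count how many records came from each source system."""
    return Counter(r["source"] for r in records)

def flag_underrepresented(records: list[dict], field: str, floor: float = 0.30) -> list[str]:
    """Flag groups whose share of the data falls below the assumed 30% floor."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < floor]

print(provenance_report(training_records))
print("under-represented regions:", flag_underrepresented(training_records, "region"))
```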

Notwithstanding the ethical challenges of AI, nearly all organisations in Southeast Asia believe they are in a new era of AI technologies and that they need to do something about it or be disrupted, according to a recent study by Accenture.

They recognise the risks as well, with companies, including Accenture, disallowing the blanket use of ChatGPT in their work. “While they’re excited about the technology, they are also trying to understand what it is about and are evaluating the risks and potential use cases at the same time,” Lee says.

Meanwhile, policymakers in the European Union are drafting regulations around the use of AI, with Italy’s privacy and data protection regulator going as far as banning the use of OpenAI’s ChatGPT in early April 2023 over privacy concerns before reversing the decision later in the month.

But whether other jurisdictions will follow suit remains to be seen. “OpenAI is not the only company that does large language models,” Lee says. “How do you ensure there’s no misinformation or intellectual property infringement? That will require the companies responsible for those models to take responsibility. Can you trust them to have good practices, or do you need to have regulation? That’s the debate that’s happening right now.”

Read more about AI in APAC

  • Melbourne-based Cortical Labs’ lab-grown neurons could speed up AI training in a more energy-efficient way, and its work has caught the eye of hyperscalers and Amazon’s CTO.
  • India’s Cropin, one of the first movers in agriculture technology, has built an industry cloud platform with AI capabilities that is now used by the likes of PepsiCo to maximise crop yields.
  • SKT plans to broaden the use of AI across its business, from delivering AI-powered services to improving customer experience using generative AI models.
  • An AI engine developed by Singapore startup EntoVerse is helping cricket farmers improve yield by optimising environmental and other conditions.
