AI outcry intensifies as EU readies regulation
Policymakers are battling to keep pace with AI developments, while experts warn of societal impact
A survey of 500 business managers, conducted by Method Communications for Twilio, has found that almost all of the organisations that took part (91%) are using artificial intelligence (AI) to drive customer experience. The survey illustrates business leaders’ sentiment towards using AI to drive value in their organisations.
The survey found that four in five (81%) organisations believe recent AI technology has the potential to positively affect customer experiences.
On the flip side, US education technology company Chegg saw its share price plummet after CEO Dan Rosensweig said the company had seen a “noticeable” impact from ChatGPT.
“Since March, we saw a significant spike in student interest in ChatGPT. We now believe it’s having an impact on our new customer growth rate,” he said during the company’s Q1 2023 earnings call.
AI offers both the potential to grow a business and a significant risk: it can erode a company’s unique selling point (USP). While business leaders assess its impact, there is an outcry from industry experts and researchers that is set to influence the direction future AI regulation takes.
In an interview with the New York Times discussing his decision to leave Google, prominent AI scientist Geoffrey Hinton warned of the unintended consequences of the technology, saying: “It is hard to prevent bad actors from doing bad things.”
Hinton is among a number of high-profile experts voicing their concerns over the development of AI. An open letter published by the Future of Life Institute has over 27,000 signatories calling for a pause in the development of AI, among them Tesla and SpaceX founder Elon Musk – who, incidentally, is a co-founder of OpenAI, the organisation behind ChatGPT.
Musk has been openly critical of advances such as generative AI, but he is reportedly working on his own version. According to the Financial Times, Musk is bringing together a team of engineers and researchers to develop his own generative AI system and has “secured thousands of high-powered GPU processors from Nvidia”.
In March, the Association for the Advancement of Artificial Intelligence (AAAI) published an open letter urging the AI industry and researchers to work together to provide a balanced perspective on managing the progress of AI. The letter calls for “a constructive, collaborative, and scientific approach”, which the AAAI hopes will improve understanding and support collaboration among AI stakeholders for the responsible development and fielding of AI technologies.
These open letters and expert warnings come at a time when policymakers around the world are starting to pull together regulations covering the use of AI, training data, accountability, ethics and the explainability of AI-based decisions.
In January, the US Department of Commerce’s National Institute of Standards and Technology (NIST) published a risk management framework to improve the trustworthiness of AI systems.
Discussing the framework, US deputy commerce secretary Don Graves said: “This voluntary framework will help develop and deploy AI technologies in ways that enable the US, other nations and organisations to enhance AI trustworthiness while managing risks based on our democratic values. It should accelerate AI innovation and growth while advancing – rather than restricting or damaging – civil rights, civil liberties and equity for all.”
In the EU, members of the European Parliament agreed on amendments to the EU AI Act, which specifically cover AI foundation models, such as the large language model used in ChatGPT. The amendments mean that developers of such models now need to run tests and analysis to identify and mitigate reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law.
According to law firm Perkins Coie, the EU AI Act could have a significant impact on the global development and use of artificial intelligence. “Much like the General Data Protection Regulation (GDPR) did for privacy regulation, the AI Act could set a global standard followed by other countries and regions,” an article on the firm’s website stated.
In the UK, the government is pursuing what it describes as “a pro-innovation approach to AI regulation”. Introducing the government’s white paper on AI regulation in March, Michelle Donelan, secretary of state for science, innovation and technology, said: “We recognise that particular AI technologies, foundation models for example, can be applied in many different ways and this means the risks can vary hugely.
“For example, using a chatbot to produce a summary of a long article presents very different risks to using the same technology to provide medical advice. We understand the need to monitor these developments in partnership with innovators while also avoiding placing unnecessary regulatory burdens on those deploying AI.”
AI development is moving far faster than new regulation can be introduced. Enticing as it is to deploy AI to improve business outcomes and streamline government processes, many have voiced concerns over the speed at which AI is being rolled out across society.
“I don’t think they should scale this up more until they have understood whether they can control it,” Hinton warned in his interview with the New York Times.
Read more about AI policy
- Countries worldwide are responding to generative AI systems with their own rules and laws. For example, China has proposed new laws, and the US has requested public comments.
- To implement effective government regulation of technologies like AI and cloud computing, more data on the technologies’ environmental impacts is needed.