Generative AI is everywhere – but policy is still missing
Use of advanced artificial intelligence is outpacing organisations’ ability to govern the technology, warns ISACA research
Generative artificial intelligence (GenAI) is popping up everywhere in the workplace, but companies do not have the policies or training in place to make sure deployments do not go awry.
Staff at nearly three-quarters of European organisations already use AI at work, but only 17% have a formal, comprehensive policy governing the use of such technologies, according to research by tech professional body ISACA.
Just under half (45%) of those surveyed by ISACA said the use of GenAI was permitted in their organisation, up significantly from 29% just six months earlier.
However, staff appear to be pushing adoption further than their bosses may realise, with 62% of those surveyed using GenAI to create written content, increase productivity and automate repetitive tasks.
Lack of AI understanding
The ISACA survey found that while 30% of organisations provided some limited training on AI to employees in tech-related positions, 40% offered no training at all.
Despite the enormous hype around generative AI, most business and IT professionals have limited awareness of the technology, the survey found, with three-quarters (74%) reporting they were only somewhat familiar with it or not very familiar at all. Only 24% said they were extremely or very familiar with AI, while 37% described themselves as beginners.
This lack of familiarity does not stop staff worrying about the potential negative impact of AI, however, with 61% of respondents saying they were extremely or very worried that generative AI could be exploited by bad actors.
Only a quarter (25%) felt that organisations were giving enough attention to ethical AI standards, and just 23% believed organisations were properly addressing concerns around AI, such as data privacy and the risk of bias.
A significant majority (89%) identified misinformation and disinformation as the biggest risks of AI, but a mere 21% were confident in their own, or their company's, ability to spot such content.
While 38% of the workers surveyed expected many jobs to be eliminated by AI over the next five years, many more (79%) said jobs would be modified by it.
Digital trust professionals were more optimistic about their own fields, with 82% saying AI would have a neutral or even positive impact on their careers. However, they acknowledged that new skills would be needed to succeed, with 86% expecting to have to increase their AI skills and knowledge within two years to advance or retain their jobs.
For this study, ISACA surveyed 601 business and IT professionals in Europe. The results were in line with a larger international survey it also conducted.
Chris Dimitriadis, chief global strategy officer at ISACA, said there was a lot of work to be done when it came to understanding AI in the workplace. “You can’t create value if you don’t deeply understand the technology. You can’t really address the risk if you don’t understand the technology,” he said.
Dimitriadis added that the current situation with AI mirrored that of previous emerging technologies. “We have organisations trying to figure it out, to create their policies and put together teams to create the skills,” he said.
“But at the same time, the adoption of this technology doesn’t really wait for all of those things to happen. So, we see employees using generative AI to create written content [and] product teams trying to test out new partnerships with AI providers without this being put in a framework that can help the organisation in a meaningful way,” he added.
Dimitriadis warned that while companies were keen to seize the promise of AI and create value, many were not yet taking training seriously or reskilling their personnel to use AI in a safe manner. “Our thirst to innovate, and to create something new, sometimes outpaces the corporate policy structures,” he told Computer Weekly.
Within a larger organisation, for example, individual departments might start using AI without letting senior management know. In other cases, time-to-market pressures can leave cyber security, assurance and privacy efforts lagging behind, he said.
Education on AI risks needed
But the biggest reason for the gap between AI usage and AI governance, according to Dimitriadis, was a lack of skills. “We already have a huge gap in cyber security, for example. Imagine now with AI how this gap is transformed into something much more serious,” he said.
That’s because GenAI, in particular, can throw up novel security risks, depending on the industry and the application. Companies processing personal data need to be aware of the risk of AI introducing bias, for example, or even hallucinating new details to add to records.
There is also the threat of attacks from outsiders, perhaps crafting requests that could trick AI systems into revealing sensitive data. Another challenge is that while specific regulations around AI are emerging in some jurisdictions, it is hard for non-experts to see what rules their use of AI could be breaking.
This is why organisations need to audit AI continuously, checking its outputs to make sure it operates in line with users’ expectations, he said.
The first step towards fixing the lack of policies and oversight of AI, according to Dimitriadis, is to train the right people within the organisation.
“It always starts with people. If you have trained people, then these people can put the right policies in place. You first need to understand the risk and then write the policy,” he said.
It is also about making sure that awareness and education go beyond the experts, he said, so that all employees understand the risks of using AI and can avoid unintentional breaches.
“This is a boardroom discussion, in terms of making sure the value that is to be generated by the adoption of AI is also in balance with the risks,” he added.
Part of that will mean having the chief information security officer (CISO) on hand, along with privacy and risk experts, all trained and able to talk to boards about the challenge in the context of a particular organisation’s operations.
“If you go to the board with a generic story about threats, this will never be convincing because it will not be put in the context of a specific revenue stream within the company,” he said. “It all starts with educating people.”
Read more about generative AI
- Councils have been doing more with less for over a decade. GenAI might just buy them a little breathing space.
- Turing Institute researchers say there is a ‘huge’ opportunity for artificial intelligence to automate many government services.