Boardrooms losing control in generative AI takeover, says Kaspersky
C-suite executives are increasingly fretful about what they perceive as a ‘silent infiltration’ of generative AI tools across their organisations
Over 90% of senior business leaders interviewed for a study on the use of generative artificial intelligence (GenAI) in the enterprise believe that such tools are being regularly used by their employees, with 53% saying generative AI is now “driving” certain lines of business, and 59% expressing deep concerns that the extent of this “silent infiltration” is elevating their cyber risk levels.
This is according to cyber security supplier Kaspersky, which further warned that just 22% of leaders have even discussed laying down internal governance policies to monitor the use of generative AI, even as 91% admitted they needed more understanding of how such tools are being used to protect against security risks.
The theme of people adopting GenAI within their workplaces without oversight from IT and security teams or leadership, a trend we might reasonably term shadow AI, is not a new one as such. Earlier this year, an Imperva report raised similar concerns, warning that an insider breach at a large organisation arising from someone using generative AI in an off-the-books capacity was only a matter of time.
However, given the steadily widening scope and ever-growing capability of generative AI tools, organisations can no longer afford to forgo even minimal oversight.
“Much like bring-your-own-device [BYOD], gen AI offers massive productivity benefits to businesses, but while our findings reveal that boardroom executives are clearly acknowledging its presence in their organisations, the extent of its use and purpose are shrouded in mystery,” said Kaspersky principal security researcher David Emm.
“Given that GenAI’s rapid evolution is currently showing no signs of abating, the longer these applications operate unchecked, the harder they will become to control and secure across major business functions such as HR, finance, marketing or even IT,” said Emm.
To function effectively, generative AI relies on continuous learning through data inputs. This means that in every instance of unsanctioned usage, when an employee inputs data into a generative AI tool, they are transmitting it outside the organisation and may very well be causing a data breach, even if acting in good faith.
Therefore, said Emm, boardroom concerns about data loss are very real, and the research data reflected this, with 59% of leaders expressing “serious apprehension” over this risk factor.
In spite of these concerns, the study also revealed that 50% of business leaders plan to harness generative AI in some capacity, likely to automate some of their workforce’s more repetitive tasks, and 44% planned to integrate generative AI tools into their daily routines. Interestingly, 24% also said that the functions they felt most inclined to use generative AI to automate were IT and security.
“One might assume that the prospect of sensitive data loss and losing control of critical business units might give the C-suite pause for thought, but our findings reveal that almost a quarter of industry bosses are currently considering the delegation of some of their most important functions to AI,” said Emm.
“Before this happens, it is imperative that a comprehensive understanding of data management and the implementation of robust policies precede any further integration of GenAI into the corporate environment.”
Speaking on the issue in the wake of a speech in which prime minister Rishi Sunak called for the world to take the risks arising from generative AI more seriously, and ahead of the AI Safety Summit at Bletchley Park, Fabien Rech, senior vice-president and general manager at Trellix, commented: “Generative AI is a double-edged sword – as the cyber security landscape continues to evolve, the proliferation of generative AI only adds further complexity to the mix.
“With the first AI Safety Summit launching next week, it’s vital for organisations to be aware of what this will mean for the future of regulation of this emerging technology, and how businesses can be expected to utilise and integrate it.
“With the ease of access to generative AI tools like chatbots and image generators, the simplification of day-to-day tasks and activities is a definite benefit to overall productivity and efficiency. However, there are understandable concerns around how these can be used to improve the sophistication of malicious code injection, phishing and social engineering, and even the use of deepfake technology to bypass data privacy regulations.”
Added Rech: “It’s vital for organisations to make sure their security hygiene is robust. By integrating the right solutions and technology, security teams can build a more resilient protection environment with adaptive technology that flexes to meet threats head on. Taking this approach allows organisations to regain confidence, giving them the upper hand while protecting the business from cyber criminals looking to leverage generative AI.”
Read more about generative AI risks
- The growth of generative AI poses risks and opportunities for IT and business leaders, says Gartner, and CIOs need to prepare for disruption.
- Despite its benefits, generative AI poses numerous, and potentially costly, security challenges for companies. Review possible threats and best practices to mitigate risks.
- Longtime trust and safety leader Tom Siegel offers an insider's view on moderating AI-generated content, the limits of self-regulation and concrete steps to curb emerging risks.