Generative AI: Data privacy, backup and compliance
This article is part of the MicroScope issue of December 2023
Generative or conversational artificial intelligence (AI) tools have attracted a lot of attention, as well as some controversy, as applications such as OpenAI’s ChatGPT and Google’s Bard create human-like responses to queries or prompts. These apps draw on large databases of content and raise questions around intellectual property, privacy and security.

In this article, we look at how chatbots work, the risks they pose to data privacy and compliance, and where generated content stands with regard to backup.

These tools – more accurately termed “generative AI” – draw on large language models to create human-like responses (see box). OpenAI’s large language model is the Generative Pre-trained Transformer (GPT); Google Bard uses the Language Model for Dialogue Applications (LaMDA).

However, the rapid growth of these services has caused concern among IT professionals. According to Mathieu Gorge, founder of VigiTrust, in a recent research project all 15 chief information security officers he interviewed mentioned generative AI as a ...
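One practical control often discussed in this context is stripping obvious personal data from prompts before they leave the enterprise and reach an external generative AI service. The sketch below illustrates the idea; the function name and regex patterns are our own illustrative examples, and real personal-data detection needs far more than a pair of regular expressions.

```python
import re

# Illustrative patterns only; production PII detection requires much
# more sophisticated tooling than simple regexes.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact_prompt(prompt: str) -> str:
    """Replace obvious email addresses and phone numbers before the
    prompt is sent to an external generative AI service."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com on +44 20 7946 0958"))
# → Contact [EMAIL] on [PHONE]
```

A gateway of this kind only reduces accidental leakage; it does not address the compliance questions around what the AI provider retains or how generated content is backed up.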
Features in this issue
- Industry 5.0: What is it and what does its future hold?
  As Industry 5.0 becomes a more popular topic of conversation in the business tech world, critics have questioned whether it is truly a revolution rather than a progression
- Generative AI: Data privacy, backup and compliance
  We look at generative AI and the risks it poses to data privacy for the enterprise, implications for backup, and potentially dangerous impacts on compliance