Understanding the rise and role of Private AI
Part 1: AI hype vs fear and doubt
The leakage of proprietary IP and other data has been one of the biggest corporate fears around generative AI since the launch of ChatGPT. Some companies even banned employees from using GenAI after news stories of sensitive documents being uploaded – to produce a slideshow, for example – and later retrieved by outsiders using clever prompt engineering.
That is a big part of why there's hardly a vendor in the area now that doesn't have something to say or offer on the theme of Private AI. The latest is Dell, but I've also heard a whole range of others talk on the topic very recently, from Google Cloud, IBM and Red Hat to Box, NetApp and VMware.
The basic concept is very simple: it's your own private and secure generative AI, which you fine-tune and train with your own data, but which is not connected to – and doesn't share data with – any of the public GenAI services. Private AI can be implemented on-prem, which is why the server and storage companies are interested: it needs plenty of both compute power and storage. Alternatively, it can run in a public cloud but inside a private walled garden, just like any other cloud-hosted private business app.
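To make the on-prem flavour concrete, here's a minimal Python sketch using the open-source Hugging Face transformers library. The model name and the q3_sales.txt file are purely illustrative assumptions, not a recommendation; the point is that the model weights and the prompt both stay on your own hardware, with no call out to a public GenAI service.

```python
# A minimal sketch of on-prem Private AI: an open model is downloaded once,
# then runs entirely on local hardware. Model name and file are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any locally hosted model

# After the initial download, local_files_only=True can be passed to
# from_pretrained() to guarantee no further network access at all.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The sensitive document never leaves the machine it sits on.
prompt = "Summarise our Q3 sales figures:\n" + open("q3_sales.txt").read()
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```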
AI-as-a-Service
It can also run in a hybrid form, where a SaaS company such as Box, Dropbox or Salesforce operates the AI infrastructure. The SaaS provider adds your business data for context and training, relying on its security, trust and data governance capabilities to ensure that only you have access to your private data – and to anything derived from it.
Where you choose to operate your Private AI will depend to a large extent on where your data already is and who you trust with it. As people are fond of pointing out, “data has gravity” and it takes work to move it, so if it’s on-prem, say, then it makes sense to run the AI there too.
On the other hand, if you already store it with one of the public cloud providers or a SaaS company, and you’re happy with their security, then there’s no reason to think they can’t also host your Private AI safely.
Avoiding the dangers of over-sharing
Some of this AI privacy and security was already feasible, of course. For example, OpenAI says that if you have a paid-for Enterprise subscription, ChatGPT won't use your conversations to improve its models.
Going beyond that is a little more complicated, however. Not only must data be protected against leaking outside the organisation; not everyone within the organisation will be authorised to see it either. That's why AI-driven search services such as those in Box AI, Dropbox Dash and Microsoft 365 Copilot are designed to be permission-aware, keeping track of who is allowed to access what.
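As a deliberately simplified sketch of what permission-aware retrieval means in practice: every document is checked against the asking user's permissions before it can be fed to the model as context. The Document class and retrieve_for_user function here are hypothetical, not any vendor's actual API, and a crude keyword match stands in for real semantic search.

```python
# Hypothetical sketch of permission-aware retrieval: documents are filtered
# against the user's ACL before any of them reach the model as context.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set[str]  # ACL: who may read this document

def retrieve_for_user(user: str, query: str, corpus: list[Document]) -> list[Document]:
    """Return only matching documents the user is permitted to see."""
    permitted = [d for d in corpus if user in d.allowed_users]
    return [d for d in permitted if query.lower() in d.text.lower()]

corpus = [
    Document("pay-2024", "Executive pay review 2024 ...", {"cfo", "hr_lead"}),
    Document("handbook", "Staff handbook: holiday policy ...", {"cfo", "hr_lead", "alice"}),
]

# Alice's AI assistant can draw on the handbook, but never sees the pay review.
context = retrieve_for_user("alice", "holiday", corpus)
print([d.doc_id for d in context])  # ['handbook']
```

A production system layers the same idea onto vector search and enterprise identity management, but the principle is identical: the permission check happens before retrieval, not after generation.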
In part 2, I'll go on from the whys and wheres to look at some of the hows and whats.