Italy’s ChatGPT ban: Sober precaution or chilling overreaction?
Italy’s data protection authority has issued a temporary ban on ChatGPT, citing data protection concerns and alleged breaches of the GDPR. Is this a reasonable precaution, or a chilling restriction on personal freedoms?
A sudden ban on the use of ChatGPT by the Italian data protection authority has divided artificial intelligence (AI) and data privacy experts over whether officially restricting the groundbreaking yet highly controversial service is a sensible precaution under the circumstances, or a massive overreaction with chilling implications for individuals’ freedoms.
The data protection regulator, the Garante per la Protezione dei Dati Personali (GPDP), issued its order against ChatGPT’s US-based owner, OpenAI, on Friday 31 March.
The authority accused OpenAI of collecting personal data unlawfully. It claimed there was “no way” for ChatGPT to continue processing data without breaching privacy laws, and no legal basis underpinning its collection and processing of data for training purposes. The GPDP added that the information the ChatGPT bot provides is not always accurate, implying that inaccurate personal data is being processed.
Furthermore, the GPDP said, ChatGPT lacks an age verification mechanism, and in so doing exposes minors to responses that are inappropriate to their age and awareness, even though OpenAI’s terms of service state the service is addressed only to users aged 13 and up.
The Italians additionally took into account a 20 March data breach at the service. The incident resulted from a bug in the redis-py open source library that exposed active users’ chat histories to other users in some circumstances, and additionally exposed the payment information of approximately 1.2% of ChatGPT Plus subscribers during a nine-hour window. This data included first and last names, email and postal addresses, and limited credit card data.
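For readers wondering how a client-library bug can surface one user’s data to another, the sketch below simulates the general class of failure reported in redis-py’s asynchronous client: a request is cancelled after its command has been written but before its reply has been read, leaving the connection “dirty” when it is reused. This is a minimal, hypothetical Python simulation; the FakeConnection class and the command names are invented for illustration and do not reproduce redis-py’s actual code.

```python
import asyncio
from collections import deque


class FakeConnection:
    """Stands in for one pooled connection to a Redis-like server."""

    def __init__(self):
        self._replies = deque()  # replies queued on the wire, oldest first

    def send_command(self, command: str) -> None:
        # The "server" answers instantly; the reply sits unread on the socket.
        self._replies.append(f"reply-to:{command}")

    async def read_reply(self) -> str:
        await asyncio.sleep(0)  # yield control, as a real socket read would
        return self._replies.popleft()


async def execute(conn: FakeConnection, command: str) -> str:
    conn.send_command(command)
    # If this task is cancelled here, the reply is never consumed and the
    # connection goes back into the pool "dirty" -- the core of the bug class.
    return await conn.read_reply()


async def main():
    conn = FakeConnection()  # a single shared pooled connection

    # User A's request is cancelled mid-flight (e.g. by a client timeout).
    task_a = asyncio.create_task(execute(conn, "GET chat_history:user_a"))
    await asyncio.sleep(0)  # let A write its command to the wire
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass  # A's reply is still sitting unread on the connection

    # User B reuses the connection and receives A's stale reply.
    print(await execute(conn, "GET chat_history:user_b"))
    # -> reply-to:GET chat_history:user_a  (another user's data)


asyncio.run(main())
```

The remediation for this class of bug, in broad terms, is to tear down or drain a connection whose request was cancelled rather than returning it to the pool.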
Under the European Union (EU) General Data Protection Regulation (GDPR), OpenAI’s designated representative in the European Economic Area (EEA) will have 20 days to notify the GPDP of measures implemented to comply with the order, or face fines of up to €20m or 4% of worldwide annual turnover, whichever is higher.
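As a quick illustration of how that penalty ceiling works, the applicable maximum is whichever of the two limits is higher; the turnover figure below is invented for the example.

```python
def gdpr_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Maximum GDPR fine under Article 83(5): EUR 20m or 4% of
    worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)


# A company with EUR 1bn turnover faces a ceiling of EUR 40m, not EUR 20m.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```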
The decision makes Italy the first country to have issued any kind of ban or restriction on the use of ChatGPT – although the service is also unavailable in several countries, including China, Iran, North Korea and Russia, where OpenAI has chosen not to offer it.
Commitment to privacy
In a statement, OpenAI said it had disabled access to ChatGPT in Italy as a result, but hoped to have the service back online soon. It said it was “committed to protecting people’s privacy” and that, to the best of its knowledge, it operates in compliance with GDPR and other privacy laws and regulations.
It added that it had been working to reduce the use of personal data in training ChatGPT, because it wanted the system to learn about the world in general, not about private individuals.
Time for a reset
At about the same time as the Italian authorities were putting the finishing touches to their announcement, a group of more than 1,000 AI experts and other figures in the tech industry, among them Apple co-founder Steve Wozniak and increasingly erratic social media baron Elon Musk, put their names to an open letter calling for a temporary moratorium on the creation and development of AI models such as the large language model (LLM) behind ChatGPT.
In their letter, the signatories argued that the race to deploy AI had spun out of control, and that a pause was necessary to allow humanity to determine whether such systems will truly have beneficial effects and manageable risks. They called on governments to step in should the industry not hold back voluntarily.
Michael Covington, vice-president of strategy at Jamf, was among many who applauded the GPDP’s decision on similar grounds. “I am encouraged when I see regulators stand up and enforce written policies that were designed to protect individual privacy, something that we at Jamf consider to be a fundamental human right,” he said.
“ChatGPT has been experiencing massive growth, and this growth has happened with near-zero guardrails. OpenAI has dealt with a few issues, like a lack of data handling policies and well-publicised data breaches. I see value in forcing a reset so this truly innovative technology can develop in a more controlled fashion.
“That said, I get concerned when I see attempts to regulate common sense and force one ‘truth’ over another,” added Covington. “At Jamf, we believe in educating users about data privacy, and empowering them with more control and decision-making authority over what data they are willing to share with third parties.
“Restricting the technology out of fear for users giving too much to any AI service could stunt the growth of tools like ChatGPT, which has incredible potential to transform the ways we work,” he said.
“Furthermore, there is a lot of misinformation on the internet today, but without knowing how the world will monitor for ‘facts’, we have to respect freedom of speech, and that includes factual inaccuracies. Let the market decide which AI engine is most reliable, but don’t silence the tools out of fear for inaccuracies, especially as this exciting technology is in its infancy.”
Security concerns will get worse
Dan Shiebler, head of machine learning at Abnormal Security, said security concerns over LLMs would likely get “substantially worse” as the models become more closely integrated with APIs and the public internet, something that, to his mind, is demonstrated by OpenAI’s recent implementation of support for ChatGPT plugins.
He speculated that more such actions may follow. “The EU in general has shown itself to be pretty quick to act on tech regulation – GDPR was a major innovation – so I’d expect to see more discussion of regulation from other member countries and potentially the EU itself,” he said.
Shiebler said the ban was unlikely to have much impact on the development of AI, simply because development can be carried out from any jurisdiction. However, should bans or restrictions start to spread across the EU or US, this would be a much larger hindrance.
However, he said that while the UK should “absolutely” look into concerns over potential malicious use cases for LLMs, adopting a similar policy would not be helpful. “An immediate blanket ban is more likely to exclude the UK from the conversation than anything else,” he pointed out.
WithSecure’s Andrew Patel – who has conducted extensive research into the LLMs that underpin ChatGPT – agreed, saying that Italy’s ban would have little impact on the ongoing development of AI systems and, furthermore, could render future models substantially more dangerous to Italian speakers.
“The datasets used to train these models already contain a great many examples of Italian,” he said. “If anything, shutting off Italian input to future models will cause such models to be mildly worse for Italian inputs than for others. That’s not a great situation to be in.”
Blatant overreaction
Asked if he thought the Italian authorities had perhaps gone too far, Patel said simply: “Yes, this is an overreaction.”
Describing ChatGPT as a “natural” technological progression, Patel said that if the GPDP’s issue was really to do with Italian citizens interacting with an invasive US technology company, the authority would have taken similar action against other US-based platforms.
“The fact that ChatGPT is hosted by a US company should not be a factor,” he said. “Nor should concerns that AI might take over the world.”
Patel argued that by restricting the ability of every Italian citizen to access ChatGPT, Italy was putting itself at a substantial disadvantage.
“ChatGPT is a useful tool that enables creativity and productivity,” he said. “By shutting it off, Italy has cut off perhaps the most important tool available to our generation. All companies have security concerns, and of course employees should be instructed to not provide ChatGPT and similar systems with company-sensitive data. [But] such policies should be controlled by individual organisations and not by the host country.”
Erick Galinkin, principal AI researcher at Rapid7, said it has been known for years that LLMs memorise training data, and there are already countless instances of generative models reproducing it, so ChatGPT’s behaviour could not have come as a surprise to the GPDP.
Ultimately, he said, the GPDP’s concerns seem to stem more from data collection than from the actual training and deployment of LLMs, so what the industry really needs to address is how sensitive data is collected and how it makes its way into training sets.
“As Bender et al. cover well in their paper, On the Dangers of Stochastic Parrots, these models do have real privacy risks that have been well known to the AI ethics and AI security community for years now,” said Galinkin.
“We cannot put the toothpaste back in the tube, so to speak. ‘Banning’ these models – whatever that term means in this context – simply encourages more perfidy on the part of these companies in restricting access, and concentrates more power in the hands of tech giants who are able to sink the money into training such models.
“Rather, we should be looking for more openness around what data is collected, how it is collected and how the models are trained,” he said.