
ChatGPT Triggers Push Back On AI Innovation

The AI conversation has been building significant momentum over the past five or so years, but will we look back on March 2023 as the moment that interest in this area really broke through into mainstream consciousness from its previous specialised domains of technology, academia, legal policy, and science fiction? Max Tegmark might have something to say about this, but in my view the release of GPT-4 on 14th March has lifted the AI conversation to a whole new plane.


Public awareness of and interest in AI have been building since the launch of OpenAI’s chatbot, ChatGPT, in November 2022, but the release of GPT-4, the multimodal large language model that underpins the bot, has provoked unprecedented reactions that have in turn captured worldwide press and media attention.

I am referring to two reactions in particular.


Musk and FOLI call for AI moratorium

First, on 22nd March the Future of Life Institute (“FOLI”) published an open letter calling for all AI labs “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. The backdrop to this demand is, according to the signatories, that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”. This is sobering, given the signatories’ observation that “AI systems with human-competitive intelligence can pose profound risks to society and humanity”. And in justifying their call for a moratorium, the signatories replay the words of OpenAI themselves:

“OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.”

In the era of “Cancel Culture”, it would be surprising if there were no backlash against AI, given how much attention ChatGPT has been receiving, but the newsworthiness of FOLI’s letter derives not simply from its contents, but from the identities of the signatories. Of course, Elon Musk and Steve Wozniak were bound to attract the most interest, due to their fame and pioneering roles in the world of technology, but a quick look at the list shows that many leading thinkers in the AI space share these views: serious thinkers in the field are expressing very serious concerns.

But for balance it’s worth noting that the letter has not been universally embraced in the AI leadership community and that, as others have noted, FOLI is funded by the Musk Foundation. Mr Musk was also a co-founder of OpenAI, though he left that organisation in 2018 due to a “conflict of interest”.

Italian data protection regulator bans ChatGPT’s use of Italian data

Second, on 31st March the GPDP, the Italian data protection regulator, imposed a temporary ban on ChatGPT’s processing of Italian users’ data. The GPDP seems to be on an anti-AI mission at the moment: this is the second AI ban it has imposed this year, following one on another chatbot in February.

The English-language report on the GPDP’s website is not comprehensive, and the translation might not do justice to the reasoning applied, but the key issues are these:

- On 20th March a data breach was reported, which affected the conversations of ChatGPT users and information on subscriber payments. Presumably this is a reference to a personal data breach within the meaning of Article 4(12) of the EU General Data Protection Regulation.
- No information is provided to users and data subjects whose data are collected by OpenAI. Presumably this is a reference to the GDPR transparency rules in Articles 12, 13 and 14.
- There appears to be no legal basis underpinning “the massive collection and processing of personal data to ‘train’ the algorithms on which the platform relies”. This must be a reference to the rules on the legitimacy of data processing in Articles 6 to 9.
- “The information made available by ChatGPT does not always match factual circumstances, so that inaccurate personal data are processed”. This must be a reference to the accuracy principle in Article 5(1)(d).
- There isn’t an age verification mechanism for use of the platform by children. This must be a reference to Article 8(2), but the GPDP also expresses concern about the appropriateness of the chatbot’s responses received by children.

On the face of the English translation, strictly speaking it seems that the GPDP isn’t banning ChatGPT. Instead, the ban relates to OpenAI’s use of Italian users’ personal data. This probably means that the GPDP intends that OpenAI should not use, for Machine Learning purposes, the personal data of persons who are protected by the Italian implementation of the GDPR, nor should it process the personal data of any users of the chatbot who are similarly protected, which in either situation will mean more than just Italian nationals. However, in the latter situation – that is, use of the chatbot itself – the personal data of chatbot users is likely to be very limited in scope, i.e., limited to subscriber data (unless the user has been conducting vanity searches about themselves).

But to all intents and purposes, the GPDP’s order constitutes a de facto ban on the use of ChatGPT in Italy, if it truly does extend to prohibiting the processing of users’ personal data. There is no way of using the system without logging in, which requires the submission of an email address (unless the user selects the OAuth log-in options provided by Google and Microsoft), and an email address is likely to be personal data in many cases. Moreover, it might be impossible for OpenAI to separate out training data that is covered by Italian data protection law from its model (or it might not be willing to do so for strategic reasons, to protect its legal position).

Thus, until the situation is resolved, ChatGPT is essentially banned in Italy. It may not be a ban de jure, but a de facto ban can achieve the same result: technology is stopped in its tracks.

What should the regulatory framework look like?

These two developments shine a spotlight on the central issues that the AI stakeholder community needs to get to grips with. Essentially, they concern the nature of the regulatory framework that should apply and the extent of independent oversight that the developers of AI should accept as part of a licence to operate.

Plainly, if the risks are as high as FOLI suggests, it would be wholly unacceptable for the technology industry to self-regulate its approach to AI, or to allow power to concentrate in the hands of a small group of companies.

The EU’s proposed approach to regulating AI

The EU is already grappling with these questions and has proposed an AI Regulation that will tier AI systems by reference to their risk levels, with everything overseen by a new pan-EU Artificial Intelligence Board and a system of national regulators. If this risk framework were applied to ChatGPT in its function as a chatbot, it would be regarded as a limited-risk system.


UK approach to regulating AI

The UK is taking a different approach. Rather than creating a new regulatory system, the current system of regulators will be required to consider AI issues within their day-to-day regulatory functions, guided by five new regulatory principles that will eventually be codified in legislation. The current UK government’s overriding concern is to promote innovation and growth, so as to secure for the country a global leadership role as an AI powerhouse. The EU wants that too, so the real issue will be how to balance growth against competing interests, such as data protection and other rights and interests.

The problem posed by FOLI’s letter is that, for all the talk about growth, innovation and the balancing of rights, we might not be addressing the most existential issue of them all: that AI might pose a threat to humankind itself. In that sense, as Mr Tegmark has informed us, we could be on the cusp of developing a new lifeform, Life 3.0. If that is true, then perhaps the enormity of the issues has not been fully realised within the debates that are shaping the regulatory frameworks. As with the scientific advances that led to nuclear weaponry, are we ignoring the risk that we are taking a path of technological progression that could ultimately threaten our existence and that we are unable to unwind?


Moratorium unlikely?

But even if we are aware of the enormity of the choice we face, what will we do about it in a world where humankind seems destined to threaten its own interests even when all conceivable alarm bells have been sounded? Is our progression towards an AI dystopian future inevitable, in the same way that progression to climate catastrophe seems impossible to avoid? Perhaps the argument most likely to prevent an AI moratorium in the West is that China, Russia, India and others will carry on regardless. Moreover, we do not need to look to the West’s rivals to identify barriers to an AI moratorium: it is not at all surprising that the CEOs of Microsoft and Google have not added their signatures to FOLI’s letter.

Yet none of these arguments, by itself, renders the GPDP’s position on ChatGPT unassailable. Looking at the list of issues that have been highlighted, at least one of them seems suspect after a modicum of analysis: using personal data for Machine Learning purposes can have a legal basis within the legitimate interests criterion of GDPR Article 6 or, insofar as special categories of personal data are concerned, within the manifestly public information criterion of Article 9. Indeed, the presence of a legitimate interest in mass processing of personal data by technology companies was implicitly recognised by the Court of Justice of the EU in the right to be forgotten case involving Google.

It may also be that the GPDP has stretched the rules on parental consent beyond their elasticity, by extending the statutory purpose to the quality and content of information that a child may receive through a chatbot, as opposed to how a chatbot processes that child’s personal data. The transparency rules and accuracy rules also have limitations if the wording of the EU GDPR is taken at face value. Finally, there is the question of the exercise of regulatory discretion: if a chatbot would be regarded as limited risk under the proposed EU AI Regulation, does it really amount to a fair exercise of discretion to ban it under the GDPR?

Should privacy be the only voice in the room?

The issues raised by the GPDP will presumably result in a referral of the case to the Court of Justice, and we will learn the answers in the fullness of time, but the GPDP has – whether intentionally or not – shone a much-needed spotlight on the issue of the primacy of stakeholders (as has FOLI). If a system is limited risk when viewed through the lens of specialist AI legislation, and if AI can be a power for good, why should a data protection regulator have primacy over the issue of a chatbot’s accessibility and use, and be able to implement a ban of its own volition?

Of course, data protection is a massively important issue, but it is not the only issue that AI engages. Perhaps, given the state of the regulatory framework, there is no other voice in the room with effective regulatory power, but if data protection is the only area that currently has real regulatory skin in the game, that would seem to be a great shame.