As a general note, and as mentioned above, the Italian DPA has held its legal analysis close to its chest, making it difficult for outside observers to analyze the legal implications in detail. With that caveat, below are some of our initial reflections.
From one perspective, one might be tempted to call the outcome of the process so far a touch anti-climactic: it started with an outright ban on one of the most advanced AI solutions the world has seen, only to be reduced to a discussion on (mostly) fulfilling information requirements – a common theme for any processing tool. While measures related to information requirements and data subjects' rights are by no means trivial to implement, they are seldom infeasible, and it should, at least in theory, be more than possible for AI providers to implement them within relatively short deadlines.
The emphasis on information obligations and the facilitation of data subjects' rights is perhaps also a bit surprising given the wording of the initial press release by the DPA, which stated that the question of "legal basis" was more important. As it turned out, the topic of "legal basis" was covered by a single measure, merely requiring OpenAI to remove any reference to "contract" as a legal basis (cf. GDPR article 6 (1) (b)) and instead rely on "consent" or "legitimate interest" (cf. GDPR article 6 (1) (a) and (f)). In our view, the statement in the press release appears correct; at the very least, we believe the question of "legal basis" is legally the more interesting one.
Relying on consent requires an actual option to withdraw that consent, making it hard to see how this could work in practice for a huge AI model. As such, we would expect "legitimate interests" to be the most practical and applicable legal basis. However, it would be interesting to know how exactly OpenAI might apply the balancing test underpinning any use of the "legitimate interest" basis, i.e. the weighing of the interests of the business against the protection of the individual's privacy. Adding to the complexity is the topic of inaccurate personal data, which was raised initially by the Italian DPA but is not (at least not directly) mentioned in the measures required on April 11. For example: how can you avoid inaccuracy and responses that do not "match factual circumstances" if the user specifically asks for a fictitious response about a person? And as a follow-up question: would that fictitious response necessarily be to the detriment of an actual person? This opens up a broader discussion on the extent to which data protection principles can be applied to conversational AI solutions such as ChatGPT.
As for the age verification tool requirement, one may argue that there is an abundance of services on the internet without any age verification mechanism that may expose children "to receiving responses that are absolutely inappropriate to their age and awareness". Several websites come to mind – no need to mention names here – which may lead to the conclusion that the decision by the Italian DPA appears somewhat arbitrary, and even overreaching, on this point.