Italy’s data protection regulator has set strict conditions for OpenAI’s ChatGPT, requiring greater transparency and age verification measures to protect user privacy before restrictions are lifted.
Italy’s data protection authority, known as the Garante, has specified the actions OpenAI must take for the order imposed on ChatGPT in March 2023 to be lifted. The watchdog suspected the artificial intelligence (AI) chatbot service of violating the European Union’s General Data Protection Regulation (GDPR) and ordered the United States-based firm to halt the processing of data belonging to individuals residing in the country.
The regulator’s press release requires OpenAI to increase its transparency and publish an information notice comprehensively outlining its data processing practices. It also requires OpenAI to immediately implement age-gating measures to prevent minors from accessing its technology and to adopt more stringent age verification methods.
OpenAI must specify the legal basis it relies upon for processing individuals’ data to train its AI, and it cannot rely on contract performance. This means OpenAI must choose between obtaining user consent and relying on legitimate interests. OpenAI’s privacy policy currently references three legal bases but appears to give more weight to the performance of a contract when providing services such as ChatGPT.
Furthermore, OpenAI must enable both users and non-users to exercise their rights over their personal data, including requesting the correction of false information generated by ChatGPT or the deletion of their data.
In addition, the agency requires OpenAI to allow users to object to the processing of their data for training its algorithms, and to conduct an awareness campaign in Italy informing individuals that their information is being processed to train its AI models.
The Garante has set an April 30 deadline for OpenAI to complete most of these tasks. OpenAI has been granted additional time to meet the further requirement of migrating from its existing age-gating approach to a more robust age verification system.
Related: ‘ChatGPT-like personal AI’ can now be run locally, Musk warns ‘singularity is near’
Specifically, OpenAI has until May 31 to submit a plan outlining the implementation of age verification technology that screens out users under 13 years old (and those aged 13 to 18 who have not obtained parental consent). The deadline for deploying this more robust system is set for Sept. 30.
Microsoft-backed OpenAI took ChatGPT offline in Italy on Friday, March 31, after the national data protection agency raised concerns about possible privacy violations and the failure to verify users’ ages.