ChatGPT, the artificial intelligence chatbot developed by OpenAI, has been banned in Italy for violating data protection rules and failing to verify users' ages.
ChatGPT, which has surged in popularity in recent months, has been blocked by Italian authorities. In a statement, the Italian Data Protection Authority said the application does not respect users' personal data and has no system to verify users' ages. The decision, which takes effect immediately, temporarily restricts OpenAI's processing of Italian users' data, and the authority has opened an investigation into the application. It also noted that on March 20 the application suffered a data breach involving user conversations and payment information.
The authority also found it unlawful for the platform to collect and store personal data in bulk in order to 'train' the algorithms that underpin its operation. In addition, it stressed that because there is no way to verify users' ages, the application exposes minors to "responses that are absolutely inappropriate to their level of development and awareness."
If the decision is violated, the company could face a fine of up to 20 million euros or 4% of its annual global turnover. The ban on ChatGPT in Italy follows a warning from Europol, the European police agency, that criminals could use the application for everything from phishing to malware and other cybercrime. ChatGPT, created by the American startup OpenAI and backed by Microsoft, can give clear answers to difficult questions, write code, songs, and essays, and even pass exams that students find difficult.