There is a great deal of activity around generative AI right now. It is a hot topic for us data privacy practitioners because it brings new challenges to the protection of personal data. Some may find that daunting, but we are genuinely excited.
With AI systems that generate and process data, including personal data, growing so rapidly, it is critical that companies using them do so in line with their GDPR obligations.
Earlier this month, the UK’s data protection watchdog, the Information Commissioner’s Office (ICO), launched a consultation series on generative AI to examine how data protection law should apply to its development and use. Generative AI is used across industries to create new content such as music, artwork, literature, and source code. The first consultation looks at how the law applies to training generative AI models on personal data collected from the web. Stephen Almond, Executive Director of Regulatory Risk at the ICO, said: “If developed and deployed responsibly, the impact of generative AI can be transformative for society. It helps protect freedom.”
The ICO is seeking input from a range of stakeholders, including developers and users of generative AI, as well as legal advisers and consultants working in the field. Responses to the first consultation will be accepted until March 1, 2024.
A second consultation, focusing on the accuracy of generative AI outputs, is scheduled to follow in the first half of 2024.