Meta Pauses Plans to Train AI Using User Data in Europe and UK

Meta, the parent company of Facebook, has announced that it will halt its plans to use user data from the European Union and the U.K. to train its artificial intelligence (AI) systems. The decision comes in response to concerns raised by the Irish Data Protection Commission (DPC) and the U.K.'s Information Commissioner's Office (ICO). Both regulatory bodies had requested that Meta pause its plans until the concerns surrounding the use of user-generated content for training AI models could be addressed.

Europe's General Data Protection Regulation (GDPR) has posed challenges for Meta and other companies seeking to improve their AI systems using user-generated data. While Meta has been using such data in the U.S., it faced legal obstacles in Europe due to the region's stricter privacy rules.

However, Meta recently informed users about an upcoming change to its privacy policy that would allow it to use public content on Facebook and Instagram to train its AI models. This content includes comments, interactions with companies, status updates, photos, and associated captions. Meta argued that it needed this data to reflect the diverse languages, geography, and cultural references of European users.

This change was scheduled to take effect on June 26, but it faced opposition from the privacy activist organization NOYB, which filed 11 complaints across EU countries. NOYB argued that Meta’s actions violated various aspects of GDPR, particularly regarding opt-in and opt-out processes for data processing.

Meta had relied on the "legitimate interests" provision in GDPR to justify its actions, as it had previously done for targeted advertising. However, regulators were likely to intervene given the difficulties users faced in opting out of data usage. Meta sent over 2 billion notifications about the changes to users, but these notifications appeared alongside other routine alerts, making them easy to miss. Furthermore, users were required to fill out an objection form rather than being able to opt out directly.

In response to the requests from the DPC and ICO, Meta has decided to pause its plans. The company’s policy communications manager, Matt Pollard, referred to the existing blog post that explained Meta’s belief that using the “legitimate interests” basis provided the most appropriate balance for processing public data at scale.

While this pause may delay Meta's plans, it highlights the broader issue of companies' reliance on user data for training AI models. Other companies have struck similar arrangements: Reddit, for example, has entered a licensing agreement with Google that gives it access to user data for AI training. However, these efforts often lack clear opt-in processes, and opting out can be a complicated and burdensome task.

Meta’s decision to pause its plans reflects the need to navigate existing legislation while leveraging user data. It is likely that Meta will revisit its plans after consultation with the DPC and ICO, potentially introducing a different user-permission process.

The ICO’s executive director for regulatory risk, Stephen Almond, emphasized the importance of trust and privacy rights in the development of generative AI. The ICO will continue to monitor major developers of AI, including Meta, to ensure that safeguards are in place to protect users’ information rights.