Slack Faces Backlash Over Lack of Transparency in AI Data Usage

Slack, the popular chat platform owned by Salesforce, is facing backlash from users over its use of customer data to train AI services. Users were surprised to learn that they had to email the company to opt out of data usage, a fact buried in an outdated privacy policy. The issue gained attention when a frustrated user posted about it on a developer community site, sparking a viral response.

The controversy centers on Slack’s AI training and its lack of transparency. Users discovered that Slack automatically opts them in to AI training and requires them to take action to opt out. This revelation raised questions about Slack’s commitment to user control and privacy.

One of the main concerns is the absence of clear information about Slack’s AI services in its privacy principles. Users wondered why the privacy policy did not mention the product “Slack AI” by name or clarify if the policy applied to it. Additionally, the use of terms like “global models” and “AI models” added to the confusion.

While the shock may be new, the terms themselves are not. According to archived pages, the terms have been in place since at least September 2023. However, the lack of transparency has left users feeling uneasy.

According to Slack’s privacy policy, customer data is used to train “global models” that power channel and emoji recommendations and search results. The company claims that its usage of data has specific limits and that the models do not have access to or reproduce customer data. However, the policy does not address the broader scope of Slack’s AI training plans.

Interestingly, Slack clarified that its separately purchased add-on, Slack AI, does not train language models using customer data. This add-on uses large language models hosted within Slack’s infrastructure, ensuring that customer data remains in-house and is not shared with any external providers.

Acknowledging the confusion, a Slack engineer conceded that the company needs to update its privacy principles to reflect how they apply to Slack AI. The engineer explained that the terms were written before Slack AI existed and primarily focused on search and recommendations. As the situation unfolds, it will be essential for Slack to clarify its current use of AI and update its terms accordingly.

The controversy surrounding Slack serves as a reminder that user privacy should not be an afterthought in AI development. Companies must state clearly in their terms of service how and when data is used, or whether it is used at all. Transparency and user control are crucial to maintaining trust in the fast-moving world of AI.

In conclusion, the issues at Slack highlight the importance of clear communication and transparency around data usage in AI services. The company must update its policies to address user concerns and clarify its approach to AI training. User privacy should always be a top priority in the development and deployment of AI technologies.