Some encountered it at home, many got to know it during the COVID-19 pandemic, and others rolled it out at the workplace; one way or another, Slack found its niche on our machines. Alongside Facebook's messaging service, Discord, and Microsoft Teams, it is now one of the most widely used collaboration platforms.
Yet even as Slack was redesigning its interface and adding new features in 2023, it was using its users' conversations and documents to train its AI. Conversations that once took place behind closed doors now feed the company's algorithms, eroding whatever privacy users thought they had. This was enough to anger users, who were never given the chance to opt in to this conscription into an AI training camp: they received no notification and no invitation to give their consent.
In violation of its customers' rights to privacy and autonomy, Slack pulls them into AI training without their knowledge.
By default, Slack takes users' messages, files, and other data and feeds them into its machine learning models. This accumulated content is valuable and powers features such as channel suggestions, search results, and autocomplete. Making this the default was problematic in itself, because customers were never informed beforehand.
Threads appeared on X.com in which a user asked whether he was imagining things or whether Slack really was using his data; unsurprisingly, they drew plenty of shares and comments. In response, Slack pointed to its privacy policy and confirmed that, yes, users' data was being fed into AI tools and applications. But, as if to excuse the lack of consent, Slack took refuge in the claim that the data was used only for open-source AI tools.
On top of that, people who do not want their data processed cannot simply submit a request and have it honored; they must go through the right gatekeeper. Only workspace admins can ask, via email, that their workspace's data be excluded from training. No such option is offered to individual users, and many are unhappy with this unilateral arrangement. One lawyer has flatly described it as a disregard for confidentiality.
Sadly, this use of user data for AI is not unheard of. Squarespace, for instance, has been known to crawl content from websites under the pretext of training its software and algorithms, a practice it stops only if the site owner objects. Here too, the questionable behavior is the default, and it can only be switched off after the fact. As for WordPress, it has been selling user-created content to artificial intelligence labs since February 2024.
Most recently, Reddit struck a deal with OpenAI giving it access to subreddit content, putting it in the same bracket as the agreement Reddit signed with Google a few months earlier, reportedly worth around 60 million dollars.