😵💫Slack's sneaky AI training

Reading time: 5 minutes
Can you help us, please? We're considering offering a FREE prompt guide written by a professional prompt engineer. Let us know if you'd be interested.
Today we will discuss:
🫢Ex-OpenAI employee breaks silence on safety concerns
🌎UK expands AI safety efforts
😾Slack's sneaky AI training tactics spark outcry
⚙️9 amazing AI tools you might not have heard of
All this and more - Let's dive in!

👩🍳What’s cooking in the newsroom?
OpenAI safety leader quits, citing neglected safety culture
♨️News - A former senior employee at OpenAI has alleged that the company is prioritizing the development of shiny products over ensuring safety.
Jan Leike, who served as the co-head of the company's superalignment team — responsible for ensuring AI systems align with human values — left the company due to a disagreement over this shift in priorities.
😮A warning? Leike outlined the reasons behind his resignation in a post on X, where he pointed out that safety culture was being sidelined in favor of developing shiny products, adding that it was becoming "harder and harder" for his team to conduct their research.
Leike stated that OpenAI should allocate more resources to address critical issues such as safety, social impact, confidentiality, and security in its next generation models. He highlighted that these problems are quite hard to get right and expressed concern that the company isn't on the right trajectory to solve them.
He also emphasized the inherent risks involved in creating machines smarter than humans and stressed the importance of OpenAI shifting towards a safety-first approach to fulfill its responsibility to humanity.
Leike's remarks are further underscored by the recent departure of OpenAI's chief scientist, Ilya Sutskever. The hashtag "#WhatDidIlyaSee" was a trending topic on X over the weekend, fueling speculation about what top leaders at OpenAI might know.
🙄See also - OpenAI's CEO Sam Altman and president and co-founder Greg Brockman both tried to address the public speculation sparked by Leike's words, but their efforts fell flat, as the negative reaction to their statements made clear.
UK to open San Francisco office in effort to bolster AI safety initiatives
🌉News - The United Kingdom is doubling down on AI safety with an expansion of its efforts. The AI Safety Institute, founded in November 2023 to identify and mitigate risks within AI platforms, is set to open a new branch in San Francisco.
The goal is to move closer to the heart of AI advancement, as the Bay Area hosts key players such as OpenAI, Anthropic, Google, and Meta, which are developing foundational AI technologies.
👩🏻🔬What's more? Michelle Donelan, the UK Secretary of State for Science, Innovation, and Technology, stated that establishing a presence in San Francisco would grant access to the headquarters of numerous AI companies. She noted that while some of these companies already have bases in the United Kingdom, having a presence in San Francisco would provide access to additional talent and enable closer collaboration with the United States.
Being closer to the epicenter will not only aid in understanding ongoing developments but also enhance visibility for the UK with these firms, which is crucial as the UK sees AI technology as a significant opportunity for economic growth and investment.
🥸What's the most important point here? It's intriguing that despite signing an MOU with the US for AI safety initiatives, the UK is opting to establish a presence in the US to address the matter.
Slack faces criticism for sneakily training ML and AI models with user data
🫠News - In light of concerns regarding big tech's data practices for AI training, the revelation that Slack has been using user messages, files, and other content to train its AI models without explicit consent has triggered widespread unease among users and privacy advocates.
This issue came to light when someone posted about it on Hacker News, a popular community site for developers, and the post quickly went viral. The user shared a link to Slack's privacy principles, sparking a conversation among current Slack users who were unaware that they were automatically opted in to AI training and needed to email a specific address to opt out.
What's even more striking is that while the shock may feel fresh, the terms themselves are not. Pages on the Internet Archive show that these terms have been in place since at least September 2023.
🤨So what has Slack been using the data for? According to the privacy policy, Slack uses customer data for training "global models" that drive channel and emoji recommendations as well as search results. However, the policy doesn't seem to cover the full extent of the company's AI training initiatives and broader plans.
🙃In conclusion - This incident is a reminder that as AI technology advances, user privacy must not be overlooked or treated as an afterthought. It also highlights how important it is for companies to be upfront in their terms of service about how and when user data is used, to ensure transparency and trust.
Tell us what you think

🤓Casual AI banter
The concept of intelligent machines dates back much further than commonly believed. While modern AI began developing in the 1950s, its conceptual origins trace back to the late 17th century.
The first true conceptualization of AI was by German mathematician and philosopher Gottfried Wilhelm Leibniz. At 20, Leibniz proposed a theory that human thoughts are quantifiable and can be broken down into basic concepts, which could be replicated by a machine to generate ideas.
He called this theoretical machine "the great instrument of reason," envisioning it as capable of answering any question.

👩🏼🚒Discover mind-blowing AI tools
Voice-Swap - A tool that allows producers, artists, and writers to change their singing voice to match the style of chart-topping singers ($6.99/month)
PitchPower - A software service that helps consultants and agencies create powerful proposals quickly ($19.99/month)
Flowrite - A Chrome extension that helps users write emails and messages 5x faster ($4/month)
Sudowrite - A writing AI tool that helps writers overcome writer's block and get feedback on their work ($10/month)
Keyframes Studio - An all-in-one platform for creating, editing, and repurposing videos for social media platforms ($9/month)
SlangThesaurus - Allows you to effortlessly turn basic text into trendy internet slang, with customizable slang levels from 1 to 5 (Free)
Vidnoz - A tool that allows users to create talking avatar videos by simply selecting a photo and writing a script (Free)
MapsGPT - Helps users quickly find and explore interesting places near them using AI (Free)
Smodin - A suite of tools designed to help students and writers save time and improve their work ($15/month)

👨🏻🎨Mythbuster
Myth - AI can replace human creativity and originality
Fact - While AI can generate art, music, and literature, it lacks the depth of human creativity and originality. AI-generated content is often based on existing patterns and data, rather than genuine inspiration and imagination.
