Altman's AI departure
Reading time: 5 minutes
Today we will discuss:
🙅‍♂️Sam Altman declines OpenAI's return offer
Meta dissolves its Responsible AI team
🤝U.S. ambassador urges partnership with India on AI regulation
😱9 amazing AI tools you might not have heard of
All this and more - Let's dive in!
👩‍🍳What’s cooking in the newsroom?
Sam Altman chooses not to return to OpenAI, joins Microsoft to lead new AI team
👨‍💻News - Sam Altman, the former CEO of OpenAI, will be joining Microsoft to head a new team dedicated to AI.
On 20 November, Microsoft CEO and Chairman Satya Nadella revealed that Sam Altman, along with OpenAI co-founder Greg Brockman and other former OpenAI staff, will be in charge of a "new advanced AI research team" at Microsoft.
🧐But what led to this?
After Sam Altman was fired on Friday, Mira Murati, OpenAI's CTO, was appointed as the interim CEO.
Soon after, Greg Brockman, OpenAI's president and co-founder, along with three senior researchers, submitted their resignations.
Meanwhile, reports surfaced that Altman had recently traveled to the Middle East to raise funds for an AI chip project code-named Tigris, with the goal of competing against Nvidia.
By 18 November, several OpenAI employees had voiced support for Altman on X, expressing interest in joining him at a new company. This led OpenAI's top investors to discuss damage control, including bringing Altman back as CEO.
On the night of November 19, Ilya Sutskever, co-founder and board director of OpenAI, informed the staff that Altman would not be returning after a weekend of discussions with the board of directors, company leaders, and investors.
Following this, OpenAI appointed former Twitch boss Emmett Shear as its CEO - its third chief executive in a single weekend. However, investors fear that Altman's absence may lead to a widespread departure of talent from the company.
Meta dissolves its Responsible AI team as it shifts focus to generative AI
👥News - Meta has dissolved its Responsible AI (RAI) team, which was focused on overseeing the safety of its AI products.
Notably, the RAI team, formed in 2019, had already been restructured earlier this year, with layoffs significantly reducing its size. Moreover, the team had limited independence, and its proposals had to go through lengthy discussions with stakeholders before being put into action.
🤔What prompted this move? As part of the broader trend in the tech industry, companies are investing heavily in machine learning to stay competitive in the AI race. Meta, like other big tech firms, has been striving to keep pace with the rapid advancements in AI. Due to this, Meta made the strategic decision to allocate more resources to generative AI.
In line with this, the majority of RAI members will join Meta's generative AI product team, while others will contribute to the development of Meta's AI infrastructure.
🤨Does this mean Meta is completely stepping back from Responsible AI? No, not entirely. According to Meta's Jon Carvill, the company will still prioritize and invest in safe and responsible AI development. Furthermore, despite the team's split, its members will continue to contribute to relevant cross-Meta efforts focused on responsible AI development and use.
U.S. ambassador Garcetti calls for India-U.S. collaboration on AI regulation
🤓News - On 20 November, Eric Garcetti, the US Ambassador to India, urged New Delhi and Washington to engage in in-depth discussions on AI regulation. He highlighted the potential for this to serve as a blueprint for a mutually beneficial relationship between the two democracies.
Garcetti added that while both nations have engaged in discussions on AI, neither side has put forward a formal proposal yet.
😀What's more? Garcetti emphasized the importance of addressing AI risks to avoid potentially disastrous outcomes. He noted that while last week's India-US 2+2 dialogue marked great strides in their defense partnership, the two nations should focus more on strengthening science and technology partnerships to use technology for global benefit rather than division or harm.
👬How can this be mutually beneficial though? The Indian government's approach has swung between fostering innovation through a non-regulatory stance and a more cautious emphasis on mitigating user harm. Meanwhile, the Biden administration's recent executive order on AI is perceived by many as restrictive to new competitors and innovation. Thus, finding common ground that promotes innovation while addressing concerns about user harm can enhance the collaboration's advantages for both sides.
👩🏼‍💻What else is happening?
Researchers at the University of Maryland developed an AI-powered app called "iNaturalist" that can capture, identify, and catalog various plant and animal species.
In a 2017 experiment, Facebook developed AI chatbots to experiment with natural language processing. While the bots communicated in a language that seemed efficient for them, it was unintelligible to human observers. The experiment raised ethical concerns and had to be shut down.
The Axie Infinity game became popular in the Philippines during the pandemic for allowing players to earn from home. Players made an income by creating NFT game farms, which also improved their financial literacy.
👩🏼‍🚒Discover mind-blowing AI tools
WithUI - A platform that allows you to build AI mini-apps without any coding knowledge
InstaNews.ai - An AI-powered service that transforms Instagram posts into engaging news articles for websites
RoomGPT - Allows users to take a picture of their room and generate redesigned versions of it
HotBall - A business consulting tool powered by AI that helps entrepreneurs turn their ideas into a feasible business model
Magify.design - An AI-powered tool that streamlines the design process for designers
Unicody - A user-friendly landing page builder that allows quick and easy creation and publication of landing pages
Talk to Books - A search engine that allows users to ask questions about books and get natural language responses
DALL-E Bro - A plugin for Figma and FigJam that uses OpenAI's DALL-E 2 model to generate images from text
Ask Abe - A helpful information assistant designed to answer legal questions based on the California Legal Code
Myth - AI poses a threat to national security
Fact - While the risks exist, AI's potential to bolster security outweighs concerns. AI enhances security by detecting cyber-attacks, analyzing data for threats, and aiding decision-making. It can assist in data analysis, surveillance, and threat detection, making it an asset, not a threat.
Can you help us, please?
Be our mentor and let us know your take on today's newsletter. Share your honest and detailed thoughts or just drop a Hi. YES! We personally read all the messages.