- OpenTools' Newsletter
🦹🏻‍♂️OpenAI's baby-proofing agenda: What they're not telling you
1 Free Newsletter > $200k Marketing Degree
We all agree marketing is the lifeblood of every business.
But are you wondering which new customer-acquisition channels brands are finding success with? Trying to figure out how to create landing pages that actually convert clicks into customers? Debating whether to take TikTok seriously?
Well, more than 72,000 world-class CEOs, founders, and marketers are getting the answers to all of those questions and more simply by reading Growth Daily for 5 minutes every morning!
Dubbed “The WSJ of marketing” by its readers, Growth Daily delivers the most impactful news, tips, tools, and insights for all things business growth!
Reading time: 5 minutes
Today we will discuss:
🚸OpenAI forms team to focus on child safety
👨🏻‍🔧Singapore builds own version of ChatGPT
😳AI giants get fussy over UK safety tests
🤩11 fantastic AI tools you might not have heard of
All this and more - Let's dive in!
👩‍🍳What’s cooking in the newsroom?
OpenAI forms new team to study child safety and prevent misuse of AI tools
👨🏻‍💻News - OpenAI has created a new team to explore ways to prevent harm from children misusing or abusing its AI tools.
The news of a Child Safety team surfaced shortly after the company posted a related job listing on its careers page.
🧐What's been happening behind the scenes? According to OpenAI, the team has already been working with platform policy, legal, and investigations groups within OpenAI, as well as external partners, to handle "processes, incidents, and reviews" related to underage users.
The firm is now looking to hire a child safety enforcement specialist who will enforce OpenAI's policies regarding AI-generated content and participate in reviewing processes concerning "sensitive" content, likely related to children.
🤨Is this truly a genuine effort though? Last summer, some schools and colleges banned ChatGPT over fears of plagiarism and misinformation. While some have since lifted their bans, doubts remain about GenAI's beneficial potential. Moreover, surveys like the one conducted by the U.K. Safer Internet Centre indicate that over half of children (53%) have observed peers using GenAI in negative ways, such as creating believable false information or images to upset others.
So, while the creation of this new team, amid scrutiny from activists and parents, suggests that OpenAI is becoming more mindful of minors' AI usage, it could also imply that the company is simply trying to protect itself from negative publicity.
Singapore develops ChatGPT-like model to represent Southeast Asians
🌏News - The Singapore government has created a Southeast Asian large language model (LLM), the first in a family of models called SEA-LION (Southeast Asian Languages in One Network).
The initiative aims to help Southeast Asian users, who often get incorrect or confusing results when using large language models like Meta's Llama 2 and Mistral AI's models in their native languages. This is especially important because the language barrier may hinder participation in an increasingly AI-dependent world.
🕵🏻‍♂️Why is it better for locals? Apart from the obvious fact that it is trained on data in 11 Southeast Asian languages, including Vietnamese, Thai, and Bahasa Indonesia, the open-source model offers a more economical and effective option for the region's businesses, governments, and academia.
Moreover, using local LLMs instead of Western ones helps promote technological independence, improves privacy for locals, and better fits with regional interests.
The underlying issue - Although there are over 7,000 languages spoken worldwide, major LLMs such as OpenAI’s GPT-4 and Meta’s Llama 2 have primarily been designed and trained for the English language.
Therefore, the burden falls on governments to develop LLMs that support local languages, so that local populations can take part in a global AI economy currently dominated by the same large tech companies that have failed to address the needs of non-native English speakers.
Top AI companies push back on UK government's safety tests
☕News - Major AI companies have urged the UK government to accelerate safety testing of their systems and to clarify what these tests are meant to achieve.
For context, major companies like Google DeepMind, Meta, Microsoft, and OpenAI agreed in November, at the UK's AI Safety Summit, to allow their software to be evaluated by the UK’s new AI Safety Institute (AISI). However, they are unhappy with the current pace and transparency of the evaluation process.
🙃What exactly are they unhappy about? The companies are pushing back against the AISI evaluation due to a lack of clarity regarding the specifics of the tests being conducted, their duration, and the feedback mechanism. It's also unclear if the testing will be required for every slight update to the model, a potential burden that AI developers may find challenging to manage.
🤯What's the most surprising aspect of this situation? According to an AISI spokesperson, the institute will share its findings with developers and expects them to address any identified risks before launch. Notably, even though companies are willing to make adjustments if flaws are found, they are not legally required to modify models or postpone releases based on test results, since participation in the testing process is voluntary.
The situation thus highlights the problems with relying only on voluntary agreements to control fast-paced tech development.
👩🏼‍💻What else is happening?
In 2017, an algorithm wrote an entire Harry Potter chapter titled "Harry Potter and the Portrait of What Looked Like a Large Pile of Ash," after it was fed text from J.K. Rowling's best-selling book series.
In 2018, the city of Las Vegas' Innovation District launched the first completely autonomous electric shuttle on a public roadway in the US.
OpenAI, in the year 2019, developed a robotic hand called Dactyl that was capable of solving a Rubik's Cube.
👩🏼‍🚒Discover mind-blowing AI tools
Melon - Helps users capture and organize insights from various sources, such as podcasts, articles, and videos
Supademo - An interactive demo platform that helps companies create engaging demos and guides
Adori Labs - A platform that helps bloggers convert their written content into engaging videos using AI
VoiceSense - A tool that uses cutting-edge NLP technology to turn articles into audio
Icetana - An AI software that uses machine learning to detect unusual or interesting events in large surveillance networks
CandyIcons - Offers a simple three-step process to generate unique icons based on keywords and preferences
Defog - An AI-powered data analysis tool that allows users to ask questions and receive answers from their own datasets
Masterdebater - An AI tool designed to facilitate debates on various topics
Grantable - An AI-powered grant writing assistant that helps users write grant proposals faster
OppenheimerGPT - A macOS menu bar app that lets users access both ChatGPT and Bard simultaneously
ToDo.is - An AI-powered task management app that helps users stay organized and productive
Last Week in AI Podcast - A popular podcast featuring insightful discussions and interviews with AI researchers, known for separating genuine, authentic AI news from clickbait headlines in short, engaging episodes.
A.I. Nation - A podcast exploring the current impact of AI, machine learning, and predictive analytics on our daily lives. Topics covered include "Biased Intelligence," "The Next Pandemic," and "A.I. in the Driver's Seat."
Women in AI - A biweekly podcast featuring leading female experts in AI, machine learning, and deep learning. Previous episodes have covered intriguing topics such as "The Future of AI & Spoken Word Content," "Implementing Responsible AI," and "Exploring Trustworthy & Ethical AI."
Can you help us, please?
Be our mentor and let us know your take on today's newsletter. Share your honest and detailed thoughts or just drop a Hi. YES! We personally read all the messages.