😵💫Big Tech’s Future Under EU AI Act
PLUS: Google's New Gemma Models
Reading time: 5 minutes
Key Points
The EU's AI Act will roll out gradually, with prohibitions on certain AI systems starting in February 2025 and new rules for general-purpose AI models beginning in August 2025.
By August 2026, new regulations will target high-risk AI systems, with deadlines extending to August 2027 for products under EU health and safety laws and August 2030 for AI used by public authorities.
National regulatory authorities will monitor compliance across the 27 member states, with potential fines up to 35 million euros or 7% of global revenue for noncompliance, focusing heavily on Big Tech companies.
☕News - The European Union’s Artificial Intelligence Act officially takes effect on 1 August, following its publication in the Official Journal of the EU on 12 July. This new legislation is a big step toward regulating AI in the EU, and it's important for various industries to understand how it will be phased in and what it involves.
⏭️Here’s what to expect next:
Under the AI Act’s implementation scheme, the legislation will be introduced gradually, similar to the EU’s approach to the Markets in Crypto-Assets Regulation, allowing organizations time to adjust and comply.
1 August marks the beginning of a countdown for the Act’s practical rollout, with major milestones planned for 2025 and 2026.
The first major milestone, the “Prohibitions of Certain AI Systems,” is set for February 2025. This will ban AI applications that exploit individuals' vulnerabilities, as well as systems that create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Following this, a new set of requirements for general-purpose AI models takes effect in August 2025. These models are designed to handle a wide range of tasks rather than being built for one narrow purpose, such as image identification.
By August 2026, new rules will apply to high-risk AI systems and to AI systems subject to transparency obligations. If these systems are embedded in products covered by EU health and safety laws, like toys, the rules will be enforced by August 2027. High-risk AI systems already used by public authorities must comply by August 2030, regardless of any design changes.
🥸What's more? The EU will set up national regulatory authorities in each of the 27 member states to oversee compliance. These authorities will have the power to conduct audits, request documentation, and enforce corrective measures. The European Artificial Intelligence Board will coordinate to make sure the rules are applied consistently across the EU.
Companies working with AI will need to meet standards in risk management, data governance, transparency, human oversight, and post-market monitoring. Big Tech companies are expected to be heavily targeted under these new regulations.
Noncompliance with the AI Act could lead to hefty fines, up to 35 million euros or 7% of the company’s total global annual revenue, whichever amount is higher.
Key Points
Google claims its new models, Gemma 2 2B, ShieldGemma, and Gemma Scope, are “safer,” “smaller,” and “more transparent” compared to others.
Gemma 2 2B is for text generation and analysis, ShieldGemma filters harmful content, and Gemma Scope helps developers understand the model's inner workings.
👨🏻💻News - Google has introduced three new generative AI models that it claims are “safer,” “smaller,” and “more transparent” than most others. These new models are additions to the Gemma 2 family, which was first introduced in May.
For context, Google’s Gemma series differs from its Gemini models in that Gemma is open-source and meant to build goodwill with developers, while Gemini’s source code isn’t public.
Coming back on topic, these new models—Gemma 2 2B, ShieldGemma, and Gemma Scope—all focus on safety but are designed for different use cases.
👨🏻🏫Here's what they are -
Gemma 2 2B: A lightweight model for generating and analyzing text that runs on various hardware, such as laptops and edge devices. It's available for certain research and commercial applications and can be downloaded from places like Google’s Vertex AI model library, Kaggle, and Google’s AI Studio toolkit.
ShieldGemma: A set of safety classifiers designed to detect and filter out harmful content like hate speech, harassment, and explicit material. It can filter both the prompts to a generative model and the content it generates.
Gemma Scope: A tool that helps developers understand the inner workings of a Gemma 2 model by breaking down complex information into more interpretable forms, providing insights into how the model processes information and makes predictions.
👩🏼🚒Discover mind-blowing AI tools
OpenTools AI Tools Expert GPT - Find the perfect AI tool to supercharge your workflow. This GPT is connected to our database, so you can ask in-depth questions about any AI tool directly in ChatGPT (free w/ ChatGPT)
ComfyUI Web - Allows users to generate realistic and high-resolution images from text descriptions (Free)
TubeOnAI - A platform that provides instant audio and text summaries for YouTube videos and podcasts ($20/month)
ReachInbox - An email outreach tool that helps businesses maximize their outreach potential ($39/month)
Vert - A website builder and lead management suite designed for small businesses ($4/month)
Lutra AI - A platform that allows users to easily build AI workflows and apps to automate repetitive tasks ($29/month)
Practina AI - A marketing automation platform that helps businesses with digital marketing ($50/month)
Pix2Pix Video - A tool that allows users to upload a short video clip and provide text instructions of how they'd like to see that video changed (Free)
ThreadBois - An online tool that helps users create viral thread headers (Free)