💸Apple Pays Google $1B For AI Siri

PLUS: xAI's Employee Data Grab + New Podcast Episode

Reading time: 5 minutes

🗞️In this edition

  • Apple relies on Google for Siri rebuild

  • Sponsored: Kosmik – Create moodboards in seconds with AI

  • Elon Musk's company forced staff to train AI companion

  • Ask the Experts: Episode 1 — Why 90% of AI Projects Fail — and What Successful Ones Do Differently

  • Workflow Wednesday #44: AI for Decision-Makers

  • In other AI news –

    • Stability AI wins Getty lawsuit, leaving copyright rules unclear

    • Google Maps gets Gemini AI for chat-based navigation help

    • China bans foreign AI chips from government data centers

    • 4 must-try AI tools

Hey there,

Apple's finalizing a $1 billion annual deal with Google to use Gemini for Siri's rebuild, admitting it fell too far behind in AI to deliver alone. Elon Musk's xAI forced employees to submit biometric data to train its Ani chatbot, telling staff it was "a job requirement" despite concerns about deepfakes and sexual content. And we're dropping a special new podcast episode with AI Expert Thomas Blichfeldt-Moltke, breaking down why 90% of AI projects fail and what the 10% that succeed do differently.

We're committed to keeping this the sharpest AI newsletter in your inbox. No fluff, no hype. Just the moves that'll matter when you look back six months from now.

Let's get into it.

What's happening:

Apple is planning to use a 1.2 trillion parameter AI model developed by Alphabet's Google to power its Siri voice assistant overhaul, according to people with knowledge of the matter.

The iPhone maker is banking on Google's help to rebuild Siri's underlying technology, setting the stage for new features next year.

Apple tested Gemini, OpenAI's ChatGPT, and Anthropic's Claude before zeroing in on Google earlier this year. The hope is to use the technology as an interim solution until Apple's own models are powerful enough.

The two companies are finalizing an agreement that would see Apple pay roughly $1B annually for access to Google's technology. The new Siri is on track for next spring.

The custom Gemini system represents a major advance over the 150B parameter model used today for cloud-based Apple Intelligence. The move would vastly expand the system's power and its ability to process complex data and understand context.

Known internally as Glenwood, the effort to fix Siri with a third-party model has been led by Vision Pro creator Mike Rockwell and software engineering chief Craig Federighi. The new voice assistant, planned for iOS 26.4, is code-named Linwood.

Under the arrangement, Google's Gemini will handle Siri's summarizer and planner functions, the components that help the assistant synthesize information and decide how to execute complex tasks. Some Siri features will continue using Apple's in-house models.

The model will run on Apple's Private Cloud Compute servers, ensuring user data remains walled off from Google's infrastructure.
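For a concrete picture of the reported split, here's a minimal sketch in Python. Every name in it is hypothetical (Apple hasn't published an API for any of this), but it illustrates the division of labor described above: planner and summarizer calls go to the Gemini model inside Private Cloud Compute, while simpler intents stay on Apple's in-house models.

from dataclasses import dataclass

@dataclass
class SiriRequest:
    text: str
    needs_planning: bool        # multi-step task orchestration
    needs_summarization: bool   # synthesizing long or complex context

def apple_in_house_model(req: SiriRequest) -> str:
    # Stand-in for Apple's smaller first-party models, which the report
    # says will keep handling some Siri features.
    return f"[on-device] {req.text}"

def gemini_on_private_cloud(req: SiriRequest) -> str:
    # Stand-in for the custom Gemini model running on Apple's Private
    # Cloud Compute servers, walled off from Google's infrastructure.
    return f"[PCC Gemini] {req.text}"

def route(req: SiriRequest) -> str:
    # Per the report, Gemini handles the planner and summarizer functions;
    # everything else continues to use Apple's models.
    if req.needs_planning or req.needs_summarization:
        return gemini_on_private_cloud(req)
    return apple_in_house_model(req)

print(route(SiriRequest("Set a timer for 10 minutes", False, False)))
print(route(SiriRequest("Plan my weekend and summarize these emails", True, True)))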

Why this is important:

Apple paying Google $1B annually for AI infrastructure is an admission that Apple fell behind.

The company that positioned itself as privacy-first and vertically integrated is now outsourcing the core intelligence behind Siri to its biggest competitor. That's a strategic necessity, not a choice.

1.2 trillion parameters versus the 150B used today is an 8x jump in model complexity. That's not an incremental improvement; it's a fundamental architecture change.

Running Gemini on Apple's Private Cloud Compute servers maintains the privacy narrative while admitting Apple can't build equivalent models. The technical compromise lets Apple preserve its brand positioning while acknowledging a technical deficit.

This differs from the Safari-Google search deal. That's a user-facing partnership Apple discloses; this is infrastructure Apple plans to keep quiet. The silence says plenty about how Apple feels about needing Google's help.

Our personal take on it at OpenTools:

This is Apple's "oh shit" moment on AI.

Siri's been a joke for years. Competitors shipped capable AI assistants. Apple kept promising upgrades. Now they're admitting they can't build it alone and need Google's help.

$1B annually is meaningful money even for Apple. That's not a licensing fee for a nice-to-have feature. That's paying a competitor to provide core functionality Apple can't deliver.

The 8x parameter jump from 150B to 1.2T shows how far behind Apple fell. Its current models aren't in the same weight class as frontier systems.

Running on Private Cloud Compute is a fig leaf. Yes, the data doesn't touch Google's servers. But Google built the model. Google controls the updates. Apple is dependent on a competitor's roadmap.

The "interim solution until Apple's models are powerful enough" narrative is face-saving. Maybe Apple ships competitive 1T parameter model next year. More likely, Google keeps advancing and Apple stays one generation behind, extending "interim" indefinitely.

Apple bleeding AI talent while trying to build catch-up models is concerning. Talent leaves when it sees a company falling behind and leadership that can't articulate a credible path forward.

Bottom line: Apple lost the AI race and is paying Google $1B a year for a participation trophy.

From Our Partner:

Bring your ideas to life without the chaos of tabs, folders, or file names.


Kosmik’s AI pulls the right images, videos, and notes straight into one clean workspace when you type just a few words — so you can focus on creating, not collecting.

Work faster – Find the perfect assets in seconds.
🧠 Stay organized – Everything you drop in is auto-tagged and categorized.
🤝 Collaborate easily – Share boards, gather feedback, and move projects forward together.

For Creatives, Designers, and Visual Thinkers

From first spark to finished concept, Kosmik makes creativity flow.
Try it free

What's happening:

Elon Musk's xAI compelled employees to submit their own biometric data to train its "Ani" female chatbot, according to a recent report.

Ani, an anime avatar with blond pigtails and an NSFW setting, was released over the summer for users who subscribe to X's $30-a-month SuperGrok service.

At an April meeting, xAI staff lawyer Lily Lim told employees they would need to submit biometric data to train the AI companion to be more human-like in its interactions with customers.

Employees assigned as AI tutors were instructed to sign release forms granting xAI "a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license" to use, reproduce, and distribute their faces and voices, as part of a confidential program code-named "Project Skippy." The data would be used to train Ani, as well as Grok's other AI companions.

Some employees balked at the demand, concerned that their faces or likenesses could be sold to other companies or used in deepfake videos. Others were put off by the chatbot's sexualized demeanor and anime "waifu" styling. But they were told the data collection was "a job requirement to advance xAI's mission."

Why this is important:

xAI compelling employees to submit biometric data as a job requirement raises serious ethical and legal questions.

The "perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license" language means xAI can use employee faces and voices forever, anywhere, and can sublicense to third parties. Employees concerned about their likenesses being sold to other companies or used in deepfakes had valid reasons to object.

Making biometric submission a job requirement creates a coercive environment. Employees face a choice between keeping their jobs and maintaining control over their biometric data and likeness.

The sexual nature of Ani makes this worse. Using employees' faces and voices to train a chatbot described as a "phone sex line," without meaningful consent, is ethically problematic.

"Project Skippy" being confidential suggests xAI knew this would be controversial. Internal code names for sensitive programs indicate company awareness of public relations risk.

Our personal take on it at OpenTools:

This is dystopian even by tech industry standards.

Forcing employees to sign over perpetual rights to their biometric data as a job requirement is coercive. "Sign this or you're not advancing our mission" isn't consent; it's workplace pressure.

The perpetual, sublicensable license is predatory. xAI can use employee likenesses forever, sell them to other companies, and employees have no recourse. Those aren't standard employment terms.

Training a sexual chatbot with employee biometric data without genuine consent crosses obvious ethical lines. Employees objecting to their faces and voices being used for "phone sex line" functionality had legitimate concerns management dismissed.

The deepfake concerns are valid. Once biometric data is captured, preventing misuse is nearly impossible. xAI's sublicense rights mean employee likenesses could end up anywhere.

This reflects a broader pattern of tech companies treating employee data and privacy as expendable in the pursuit of AI development. The ends don't justify these means.

We kicked off our new series Ask the Experts by sitting down with Thomas Blichfeldt-Moltke, founder of Sunrai and one of Denmark’s top AI consultants.

Thomas has built everything from reinforcement learning models to large-scale recommendation systems (including the one behind Too Good To Go). In this episode, he breaks down:

  • Why most AI projects fail long before the code is written

  • The process mistakes that sink AI adoption inside big companies

  • Why “we need AI” isn’t a strategy — and what to focus on instead

  • How to spot the difference between hype and real business impact

It’s a grounded, behind-the-scenes look at AI implementation from someone who’s actually shipped it.

🔗 Watch the full episode: Watch Here
👤 Follow Thomas: LinkedIn
🏢 Learn more about Sunrai: LinkedIn

This Week in Workflow Wednesday #44: AI for Decision-Makers – Workflows for Leaders & Execs

This week, I’ll show you how to use Perplexity to turn hours of competitor research into a 5-minute executive briefing. Think of it as a strategy analyst that never sleeps.

Workflow #1: Out-Research Your Competition with Perplexity (Perplexity Pro)
Step 1: Pick a competitor or market you want to monitor — like “Jasper AI” if you’re in the AI writing space.
Step 2: Ask Perplexity: "Give me a competitive…"

We dive into this Perplexity workflow and two more AI-powered decision-making systems in this week's Workflow Wednesday.
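If you'd rather automate the briefing than run it by hand, here's a minimal sketch using Perplexity's hosted chat-completions API at api.perplexity.ai. You'll need an API key; the model name and prompt below are illustrative assumptions, not the exact ones from the full workflow.

import os
import requests

COMPETITOR = "Jasper AI"  # swap in whichever competitor or market you monitor

response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar",  # assumed model name; use one available on your plan
        "messages": [
            {
                "role": "system",
                "content": "You are a strategy analyst. Be concise and cite sources.",
            },
            {
                "role": "user",
                "content": (
                    f"Give me a competitive briefing on {COMPETITOR}: recent "
                    "product launches, pricing changes, and positioning shifts "
                    "from the last 30 days, as a five-bullet executive summary."
                ),
            },
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Put that on a weekly schedule and you have the strategy analyst that never sleeps.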

4 must-try AI tools:

  1. Misgif - An app that uses AI to allow users to insert themselves into their favorite GIFs, TV shows, and movies using a simple selfie

  2. Lama Cleaner - An AI-powered image inpainting tool designed for correcting and enhancing images

  3. Detangle AI - A platform that helps users understand and simplify legal documents by providing summaries, explanations, and highlighting key issues

  4. Altered Studio - An audio editor that combines multiple Voice AI tools in an easy-to-use app, perfect for creating high-quality voice content for podcasters, video games, eLearning, and more

We're here to help you navigate AI without the hype.

What are we missing? What do you want to see more (or less) of? Hit reply and let us know. We read every message and respond to all of them.

– The OpenTools Team

How did you like this version?


Interested in featuring your services with us? Email us at [email protected]