Meta Loses $200B On AI
PLUS: Adobe's Video AI Leap | Sora's Free Ride Is Almost Over
Reading time: 5 minutes
In this edition
Meta loses $200B on AI uncertainty
Sponsored: i10X - 500+ AI tools in one place
Adobe's sneak peek at the future of AI creativity
Sora's "unlimited" dream now comes with a price tag
Workflow Wednesday #43 "AI in Action"
In other AI news:
Microsoft signs $9.7B AI cloud deal with Australia's IREN
Trump says Nvidia's Blackwell chips won't go to "other people"
China to host 2026 APEC summit in Shenzhen, push AI cooperation
4 must-try AI tools
Hey there,
Meta lost $200B in market cap after Zuckerberg couldn't explain what his $600B AI infrastructure spending will actually produce beyond vague promises about future products. Adobe demoed experimental tools that edit entire videos from a single frame and reshape lighting in photos after shooting. And OpenAI admitted Sora's economics are "completely unsustainable," launching paid credits while warning free video generation limits will drop soon.
We're committed to keeping this the sharpest AI newsletter in your inbox. No fluff, no hype. Just the moves that'll matter when you look back six months from now.
Let's get into it.
What's happening:
Meta's stock dropped 12% by Friday's close, wiping out over $200B in market cap after an earnings call where Mark Zuckerberg couldn't clearly explain what billions in AI spending will produce.
The company's building two massive data centers with reports indicating $600B in U.S. infrastructure spending over three years. Operating expenses jumped $7B year-over-year. Capital expenses hit nearly $20B quarterly.
When analysts pressed for specifics, Zuckerberg said the spending was just starting.
"The right thing to do is to try to accelerate this to make sure that we have the compute that we need," he said. "Our view is that when we get the new models we're building in MSL in there and get truly frontier models with novel capabilities, then I think this is just a massive latent opportunity."
Investors weren't reassured. Meta posted $20B in quarterly profit, but it's the first quarter where aggressive AI spending visibly impacted the bottom line. Aside from enormous data centers and well-compensated researchers, it wasn't clear what the money bought.
Meta's most powerful AI product is the Meta AI assistant, with over 1B active users, but those numbers are inflated by Facebook and Instagram's 3B users; it's not a ChatGPT competitor. The Vibes video generator boosted daily users but has limited business impact. The Vanguard smart glasses feel more like a Reality Labs extension than an LLM product.
When pressed on infrastructure spending, Zuckerberg pointed to the next generation, not recent launches.
"It's not just Meta AI as an assistant. We expect to build novel models and novel products, and I'm excited to share more when we have it," he said, adding details would come "in the coming months."
Why this is important:
OpenAI spends similarly but has $20B annual revenue and the fastest-growing consumer service in history. Google and Nvidia spent big and had great quarters.
Meta's spending the same with no comparable product.
It's been four months since Zuckerberg restructured AI teams. The Superintelligence team hasn't launched anything earthshaking yet. But as Meta spends billions to stay competitive, there's no clear indication what role Zuckerberg wants Meta playing in AI.
Will Meta AI use detailed personal data to become a ChatGPT competitor? Is Vibes the start of a consumer entertainment play? Are "business AI" references hinting at enterprise strategy?
Nobody knows. The market response shows that non-answer is wearing thin.
Our personal take on it at OpenTools:
Wall Street's asking the right question: what's the product?
Zuckerberg's betting $600B on infrastructure before defining what he's building. That's backwards from how product companies operate.
OpenAI can justify spending because ChatGPT has 800M weekly users generating $20B annually. Meta AI has 1B monthly users but most stumbled into it through Instagram. That's not product-market fit.
The timing's awkward. Superintelligence Lab launched four months ago. Expecting a killer product already is unrealistic. But announcing massive spending before you can articulate strategy invites this reaction.
Meta's core business prints money. $20B quarterly profit means they can afford to burn cash on AI longer than most. But "we're excited about future products" doesn't fly when you're asking investors to trust $600B in spending.
The contrast with competitors is stark. Google's spending pays for itself through cloud revenue. Nvidia's selling picks and shovels. OpenAI's revenue is growing exponentially. Meta's spending with no visible return path.
Either Zuckerberg knows something investors don't and the products are coming, or Meta's building infrastructure searching for a use case. One of those scenarios is visionary. The other's wasteful.
The $200B market cap loss suggests investors think it's the latter until proven otherwise.
From Our Partner:
i10X.ai is your all-in-one AI workspace. Instead of paying $20+ per model across multiple websites, i10X gives you unified access to leading models like ChatGPT, Claude, Perplexity, Gemini, and more - starting from just $8. You also get free access to over 500 specialized AI tools for image generation, document drafting, marketing automation, productivity, and more. No switching tabs. No bloated costs. Just seamless, curated AI - all in one spot.
Unified access to top LLMs (ChatGPT, Claude, Gemini, etc.)
Access free expert AI tools for content, design, legal, and marketing
Cross-model and agent memory
Compare model performance in our Chat Arena
What's happening:
Adobe demonstrated experimental AI tools at its Max conference providing new ways to edit photos, videos, and audio. These "sneaks" include tools that instantly apply changes from one frame across entire videos, manipulate light in images, and correct mispronunciations in audio.
Project Frame Forward lets video editors add or remove anything without masking, the time-consuming process of selecting objects or people. Adobe's demo showed Frame Forward identifying, selecting, and removing a woman in the first frame and replacing her with a natural-looking background. The removal applied automatically across the entire video in a few clicks.
Users can insert objects by drawing where they want them and describing the additions with AI prompts, and the changes apply across the whole video. Inserted objects are contextually aware: in the demo, a generated puddle reflected the movement of a cat already in the video.
Project Light Touch uses generative AI to reshape light sources in photos. It changes lighting direction, makes rooms look illuminated by lamps that weren't on, and lets users control light and shadow diffusion. Users can insert dynamic lighting that bends around people and objects in real time, like illuminating a pumpkin from within and turning the surrounding environment from day to night. Light source colors are adjustable for warmth or vibrant RGB effects.
Project Clean Take changes speech enunciation using AI prompts, removing the need to re-record. Users can change delivery or emotion, making voices sound happier or more inquisitive, or replace words while preserving the speaker's voice characteristics. It automatically separates background noises into individual sources so users can selectively adjust or mute specific sounds.
Other sneaks include Project Surface Swap for changing material or texture, Project Turn Style for rotating objects like 3D images, and Project New Depths for editing photographs as 3D spaces.
Why this is important:
Sneaks aren't publicly available and aren't guaranteed to become features. But many Adobe tools like Photoshop's Distraction Removal and Harmonize started as sneaks projects.
Frame Forward solves video editing's biggest bottleneck. Removing objects from video currently requires frame-by-frame masking; doing it automatically from a single frame change is a massive productivity gain.
The contextual awareness, showing puddles reflecting movement of existing objects, suggests the AI understands scene physics, not just pixel manipulation.
Project Light Touch rethinks a photography fundamental: being able to relight scenes after shooting changes what's possible without studio equipment.
Project Clean Take fixes audio without re-recording. That's valuable for podcasters, video producers, and anyone who's ever needed to fix a single mispronounced word in an otherwise perfect take.
Our personal take on it at OpenTools:
Frame Forward is the standout.
Video editing is exponentially more time-consuming than photo editing because changes must propagate across hundreds or thousands of frames. Doing it automatically from a single frame eliminates that multiplier.
The question is which sneaks ship. Adobe shows dozens yearly. Some become features, many don't. The impressive demos don't guarantee production-ready tools.
But if even half of these ship, Adobe's defending its position against AI-native competitors like Canva. Traditional creative software enhanced with AI beats AI tools trying to build creative software from scratch.
What's happening:
OpenAI now lets users buy extra credits to generate more AI videos on Sora. The move comes as the company warns it expects to reduce free allowances in the future.
Bill Peebles, who leads the Sora team, said the video platform's economics are "currently completely unsustainable."
Power users "clearly" aren't satisfied with the free generations they get daily: 100 for Pro model users, 30 for everyone else.
"We're going to let creators get as much usage as they want to pay for," Peebles said.
Ten extra video generations cost $4, according to Sora's App Store listing. The number of credits used per video depends on "length, resolution, and other factors," per OpenAI's support page. Credits last 12 months and work on OpenAI's coding platform Codex.
Peebles warned people will hit limits sooner in the future. "Eventually we will need to bring the free gens down to accommodate growth. In the meantime, enjoy the crazy usage limits." He didn't provide details but said OpenAI will "be transparent as it happens."
The decision comes amid a broader push to monetize and scale Sora while cultivating an AI-powered creator economy. OpenAI's been adding features like clip stitching, leaderboards for popular videos, and cameos, its legally contentious feature that lets users create deepfake avatars of themselves, others, and original characters that other creators can use.
Sora's piloting monetization for creators "soon," Peebles said, imagining a "world where rightsholders have the option to charge extra for cameos of beloved characters and people."
The feature's been expected given the company's efforts to move away from its original hands-off copyright approach, which flooded the platform with questionable depictions of fictional characters like Pikachu and SpongeBob and "disrespectful" deepfake videos of public figures like Martin Luther King Jr.
Why this is important:
Sora's economics being "completely unsustainable" four months after launch reveals the cost structure of video generation.
Video is far more expensive to generate than text or images. Each second of video requires generating dozens of frames with temporal consistency. At scale, that compute cost is massive.
The $4 for 10 videos pricing gives us a signal on costs. If OpenAI's charging that much while still losing money, the actual cost per video generation is likely higher. Compare that to ChatGPT text generation, which costs pennies.
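For a rough sense of the gap, here's a back-of-envelope sketch; the per-text-response cost is our own illustrative assumption, not a published OpenAI figure:

```python
# Back-of-envelope on Sora's paid credits (illustrative sketch, not OpenAI's actual cost data)
credit_pack_price_usd = 4.00   # price of an extra credit pack, per Sora's App Store listing
videos_per_pack = 10           # extra generations that pack buys

price_per_video = credit_pack_price_usd / videos_per_pack  # $0.40 charged per clip

# Assumption for comparison: a typical ChatGPT text response costs on the order of a cent to serve
assumed_text_response_cost_usd = 0.01

print(f"Charged per video: ${price_per_video:.2f}")
print(f"Roughly {price_per_video / assumed_text_response_cost_usd:.0f}x our assumed cost of a text response")
# If OpenAI still loses money at $0.40 per clip, the true compute cost per video sits above that figure.
```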
Reducing free allowances while introducing paid credits is classic SaaS playbook: hook users with generous free tier, then squeeze once they're dependent.
The creator monetization pilot is interesting. Letting rightsholders charge for character cameos creates a marketplace for AI-generated content featuring copyrighted characters. That could be huge revenue or massive legal liability depending on execution.
Our personal take on it at OpenTools:
"Completely unsustainable" is an admission that OpenAI didn't nail unit economics before launch.
They released Sora with generous free tiers to drive adoption, then realized the compute costs don't work. Now they're walking it back while users are hooked.
The $4 for 10 videos pricing feels arbitrary. Without knowing length, resolution, and quality settings, users can't predict costs. That's frustrating UX.
Reducing free limits "eventually" without specifics creates anxiety. Users don't know if their workflow will still work next month. That uncertainty kills adoption for serious creators.
The creator monetization angle is where this gets interesting or dangerous. If OpenAI successfully creates a marketplace where rightsholders license characters for AI video generation, that's genuinely novel. If they get sued into oblivion for facilitating copyright infringement, it's Napster 2.0.
The MLK deepfake problem was predictable. Hands-off approach to controversial content always ends with platform scrambling to implement moderation after backlash.
Sora's caught between being infrastructure for creators and being a consumer product. Infrastructure needs predictable pricing. Consumer products need generous free tiers. Can't optimize for both.
This Week in Workflow Wednesday #43: AI in Action - Real-World Workflow Transformations
This week, I'll show you how to use ProWritingAid to take customer-facing text and actually see where readers might lose attention. It's like running a usability test on your writing.
Workflow #1: Transform Customer Communication with AI-Powered Writing (ProWritingAid, free trial).
Step 1: Sign up, upload your text, or paste it straight into the editor.
Step 2: Head to the top bar and click the Summary Report (4th from the left). You'll see a breakdown across grammar, readability, sticky sentences, and engagement score: basically the hotspots where readers might lose attention.
We dive into this ProWritingAid workflow and 2 more real-world AI transformations in this week's Workflow Wednesday.
Microsoft Signs $9.7 Billion AI Cloud Deal With IREN - IREN, along with companies like CoreWeave, Nebius Group NV, Crusoe, and Nscale, is part of the group of so-called "neoclouds" (data center operators that specialize in AI) vying to provide computing power to large hyperscalers like Meta and AI companies such as OpenAI.
Trump says Nvidia's Blackwell AI chip not for "other people" - The possibility that Blackwell chips might be sold to Chinese firms has drawn criticism from China hawks in Washington.
China to host 2026 APEC meeting in Shenzhen, "vigorously" push AI cooperation - President Xi Jinping makes the announcement as world leaders wrap up this year's summit in South Korea.
Discover mind-blowing AI tools
Chaindesk - A no-code platform that allows users to create custom AI chatbots trained on their own data
Iconik AI - A tool that helps users generate stunning app icons for Android, iOS, and web apps without any design skills
Fill3d - A virtual staging tool that brings your empty room to life with photorealistic furniture
Jason AI - A tool for automating B2B conversations and bookings
We're here to help you navigate AI without the hype.
What are we missing? What do you want to see more (or less) of? Hit reply and let us know. We read every message and respond to all of them.
- The OpenTools Team
How did you like this version?
Interested in featuring your services with us? Email us at [email protected]




