🤫Meta Using Chinese AI Now

PLUS: New AI Career Opportunities | CEOs Double Down On AI


Reading time: 5 minutes

🗞️In this edition

  • Tables turn as Meta takes cues from Alibaba's AI

  • Sponsored: MyKin – your personal AI Advisory Board

  • Four new jobs AI will create in the years ahead

  • 68% of CEOs increasing AI budgets in 2026

  • In other AI news –

    • AI demand and tight supply push copper toward $12,000

    • Gavin Newsom pushes back on Trump AI executive order

    • AI boom helps power European banks rally

    • 4 must-try AI tools

Hey there,

Meta’s open-source lead is slipping as Chinese models gain ground, the “new AI jobs” narrative is really about reshaping existing roles, and CEOs are doubling down on AI spend despite weak returns.

We're committed to keeping this the sharpest AI newsletter in your inbox. No fluff, no hype. Just the moves that'll matter when you look back six months from now.

Let's get into it.

What's happening:

When US tech giant Meta Platforms released its flagship Llama family of AI models in February 2023, it open-sourced them, setting the company apart among global AI model developers.

That September, one derivative of Llama was announced: Alibaba Cloud's Qwen. The first generation of Qwen adopted Llama's training process and cited Meta's research findings. Chinese researchers even called Llama "the top open-source large language model."

Two years later, the tables have seemingly turned. According to a Bloomberg report Wednesday, it is now Meta taking cues from Alibaba, with unnamed sources claiming the Facebook owner is using Qwen to help train a new model code-named Avocado.

Throughout 2023 and 2024, Chinese firms keen to catch up with the US used Llama to help build their own models. The Chinese industry's reliance on Llama was highlighted by experts as evidence of China lagging the US in AI.

While Alibaba had similarly been open-sourcing Qwen models since 2023, they always trailed Llama on key performance benchmarks. It wasn't until January this year, when DeepSeek pushed Chinese AI to the forefront, that global adoption of both DeepSeek's and Qwen's open-source models exploded.

Why this is important:

According to a US government report in September, downloads of DeepSeek and Qwen models on Hugging Face increased nearly 1,000% and 135%, respectively, in the first nine months of the year.

The two Hangzhou-based firms have led the way in helping China overtake the US in the global open-source model marketplace for the first time, according to a study published last month by Hugging Face and MIT.

Central to Meta's decline has been the underwhelming performance of its foundation models on key industry benchmarks. The Llama 4 release in April was widely seen as a failure because it didn't get close to the frontier of the technology.

Meanwhile, third-party benchmark data shows Alibaba continuously improved the strength of its models over the past year while staying true to its commitment to keep open-sourcing them.

Our personal take on it at OpenTools:

This is a complete reversal in two years.

2023: Chinese companies relied on Llama, cited it as "top open-source model," built derivatives. Experts pointed to Llama reliance as evidence China lagged the US.

2025: Meta reportedly using Qwen to train new models, Chinese downloads exceed US, Qwen scores higher on openness-intelligence index.

The Llama 4 failure was a turning point. "Widely seen as a failure" and "didn't get close to the frontier" mean Meta's open-source flagship couldn't compete with closed models from OpenAI, Anthropic, and Google.

Meanwhile, Alibaba continuously improved Qwen while maintaining open-source commitment. DeepSeek's January breakthrough accelerated adoption of both DeepSeek and Qwen models globally.

This vindicates China's approach of algorithmic efficiency and open development over exclusive access to best hardware. They optimized models to run on available chips, open-sourced them, and now lead download share.

For Meta, this is a strategic disaster. It lost open-source leadership to China, Llama 4 disappointed, and it's now reportedly using a competitor's model to train its next generation while considering abandoning the open commitment that was its differentiation.

From Our Partner:

Meet KIN, Your Personal AI Advisory Board - MyKin.ai

Turn founder chaos into clear next steps.

Kin gives you a personal AI Advisory Board so you stay clear and steady across the five parts of your life: work, relationships, energy, communication, and personal growth.

  ✔ Stop rehearsing hard conversations in your head

  ✔ Say the thing you've been sitting on for weeks

  ✔ Make the call instead of spiralling on it

  ✔ Get home and actually be present

Feel the difference in 5 minutes

Talk to Kin like a real mentor and see how much lighter your evenings feel.

Try Kin for FREE on iOS or Android.

What's happening:

AI will make some jobs obsolete but also create new opportunities. According to experts, here are four possible jobs AI may create:

AI Explainer: An expert who understands AI deeply and translates it into plain language for managers, judges, and regulators. Example: In a lawsuit involving an autonomous bus hitting a self-driving vehicle, the judge and jury need to understand whether the software was up to date and whose fault the crash was. Parties might hire AI explainers to give expert testimony.

AI Chooser: Helps companies sort through a variety of AI systems and figure out what jobs each is best suited to handle, then guides purchase and installation. Example: Retailers wanting AI for multiple tasks need experts to recommend predictive AI for customer trends and generative AI for marketing materials.

AI Auditor and Cleaner: The auditor performs regular checkups to see whether AI systems have produced unfairly skewed results. The cleaner adjusts the system to eliminate bias by retraining it with new data.

AI Trainer: Uses AI itself to figure out what teaching style works best for individuals and tailors lessons accordingly. Especially valuable for midcareer workers needing rapid skill acquisition without returning to formal education, or workers in smaller companies lacking extensive training resources.

Why this is important:

These aren't speculative blue-sky jobs. Companies already struggling with bias in AI decisions need auditors and cleaners now, not in the future.

Courtroom testimony about AI systems is happening today in autonomous vehicle accidents, algorithmic discrimination cases, and patent disputes. AI explainers will become expert witnesses like forensic accountants or medical experts.

Our personal take on it at OpenTools:

These jobs are consultant categories, not new careers.

"AI explainer" is an expert witness or technical consultant. Those roles exist. Adding "AI" prefix doesn't create a new job category, it describes specialization within an existing profession (lawyer, engineer, consultant).

"AI chooser" is an enterprise software sales engineer or technical account manager. Helping companies evaluate and purchase technology is an established role. AI systems are just the newest category of software requiring this expertise.

"AI auditor and cleaner" are algorithmic fairness consultant and ML engineer. Auditing for bias and retraining models are existing job functions within AI/ML teams. Separating them into distinct roles might happen at scale, but that's organizational specialization, not a new career path.

"AI trainer using AI to personalize learning" is a corporate learning and development specialist with new tools. The job (employee training) is old. The method (AI-personalized) is new. That's technology adoption, not job creation.

The article conflates "new specializations within existing professions" with "new jobs AI creates." Those are different things. Forensic accounting emerged as a specialization within accounting. That's not "computers created forensic accounting jobs."

Real new jobs AI creates look different. Prompt engineer is a genuinely new role with no clear predecessor. Data labeler at scale (millions of people annotating training data) is a volume-driven job category AI created. YouTube content moderators didn't exist before platform scale required human review.

This is think-piece speculation, not labor market analysis. "Countless other new roles in industries that don't exist yet" is an unfalsifiable claim. Maybe true, maybe not. Can't evaluate without specifics.

What's happening:

68% of CEOs plan to increase AI spending in 2026 despite less than half of current projects generating positive returns, according to Teneo's survey of 350+ public company executives.

Companies are seeing success with AI in marketing and customer service but struggling with higher-risk applications like security, legal, and HR.

The expectations gap is massive: 53% of institutional investors expect AI returns within six months. But 84% of large-company CEOs (revenue $10B+) say it'll take longer than that.

Counterintuitively, 67% of CEOs believe AI will increase entry-level headcount, while 58% expect it to grow senior leadership roles.

Why this is important:

Trillions in AI investments aren't paying off yet, but spending's accelerating anyway. That's a bet on future value, not present returns.

The investor-CEO expectations mismatch is dangerous. If 53% of investors expect six-month payback and 84% of large-company CEOs know that's unrealistic, somebody's getting disappointed. Markets punish missed expectations.

67% expecting AI to increase entry-level hiring contradicts the automation narrative. CEOs either see AI creating new work categories or they're managing optics around workforce displacement.

Our personal take on it at OpenTools:

This is infrastructure spending disguised as product investment.

Less than half of projects delivering positive ROI means AI's still in the experimental phase for most enterprises. 

But 68% increasing spend anyway signals FOMO: fear of falling behind competitors drives budgets more than proven ROI.

The six-month investor expectation is delusional. Enterprise AI projects take 12-18 months minimum from pilot to production value. Investors pricing in quick wins are setting up for correction.

This survey captures AI investment at an inflection point: everyone's spending, nobody's sure it's working, but stopping feels riskier than continuing.

4 must-try AI tools:

  1. AgentGPT - An AI-powered tool that lets users deploy autonomous agents capable of completing a wide range of tasks, from drafting emails to planning trips

  2. Vidnoz Headshot Generator - Allows users to create highly realistic AI-generated headshots within minutes

  3. BannerGPT - A tool that reads and comprehends your blog posts to generate compelling and relevant banner images

  4. TextCraft AI - An email management tool designed to improve productivity and streamline email communication

We're here to help you navigate AI without the hype.

What are we missing? What do you want to see more (or less) of? Hit reply and let us know. We read every message and respond to all of them.

– The OpenTools Team

How did you like this version?


Interested in featuring your services with us? Email us at [email protected]