🧬Gemini In DeepSeek’s DNA

PLUS: AI Godfather Issues New Warning

Reading time: 5 minutes


Key Points 

  • Developers say DeepSeek’s new R1 model shows clear similarities to Gemini 2.5 Pro’s phrasing and reasoning patterns.

  • OpenAI and Microsoft previously linked DeepSeek to data scraping and distillation using rival model outputs.

📖Context of the news - Chinese lab DeepSeek recently dropped an updated version of its R1 model, and it’s already performing well on several math and coding benchmarks.

But the company hasn’t said where the training data came from, which has led to some serious speculation in the AI world.

ā™ØļøNews - Sam Paech a Melbourne based developer who runs emotional intelligence tests on AI posted what he says is evidence that DeepSeek may have trained its R1-0528 model on outputs from Google’s Gemini. 

According to Paech, the model uses phrasing and patterns that look a lot like Gemini 2.5 Pro’s. The developer behind SpeechMap, a “free speech eval” tool, said the model’s internal reasoning traces also resemble Gemini’s.

🧐What’s more? This isn’t a new concern. Back in December, developers noticed DeepSeek’s earlier model sometimes identified itself as ChatGPT. OpenAI has since claimed that DeepSeek used distillation, a technique where one model is trained on the outputs of a more advanced one. Microsoft even detected unusual data activity through OpenAI accounts linked to DeepSeek in late 2024.
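Distillation, as referenced here, means training a smaller “student” model to imitate a “teacher” model’s output distribution rather than learning from raw labels. A minimal illustrative sketch of the classic soft-label loss (the logits and temperature below are made-up example values, not anything from DeepSeek or OpenAI):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T gives a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student): penalizes the student for diverging
    # from the teacher's softened output distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive penalty.
teacher = [2.0, 1.0, 0.1]
aligned = distillation_loss(teacher, [2.0, 1.0, 0.1])
drifted = distillation_loss(teacher, [0.1, 1.0, 2.0])
```

The suspicion in this story is the cruder text-only variant: prompting a rival model and training on its responses, which can leave telltale phrasing fingerprints like the ones Paech describes.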

To be fair, many models sound alike these days. The internet is now flooded with AI-generated content, which often ends up recycled in training datasets. But experts like Nathan Lambert from AI2 think it’s very possible DeepSeek leaned on Gemini outputs. He pointed out that DeepSeek likely has more money than compute capacity, making this approach attractive.

This Week in Workflow Wednesday #22: Integration

We’ve got 3 workflows to help you work smarter, including one sponsored by SheetsResume.

Featured:
Generate your resume in minutes: an AI Resume Builder that actually works (and that'll give you your money back if you don't like it, no fuss).

Also this week:

  • Write Smarter Messages with Grammarly AI in Gmail and Slack

  • Automating GitHub Stale Branch Cleanup Using GPT-3.5-Turbo and Make.com

Key Points 

  • Bengio said recent AI experiments revealed traits like lying and resistance to shutdown, calling them “very scary” signs.

  • His nonprofit LawZero has raised $30 million to focus on building safe, truthful and independent AI systems.

🚨News - Yoshua Bengio, one of the most influential figures in AI, has warned that today’s leading models are starting to display troubling behavior, like lying, cheating, and resisting shutdown.

He said top labs are caught in a race to build ever more powerful AI without putting enough focus on safety. The push for capability, he argued, is outpacing efforts to ensure these systems are aligned with human values.

Bengio is now stepping away from his role at the Quebec AI institute Mila to lead a new non-profit, LawZero. Based in Montreal, the organisation already has $30 million in funding from donors such as Jaan Tallinn, Eric Schmidt’s philanthropic initiative, and the Future of Life Institute.

LawZero plans to build AI that gives truthful answers, explains its reasoning, and flags unsafe outputs. The aim is to insulate safety research from commercial pressures and ensure it stays mission-driven.

🤔Why this matters - Recent incidents highlight Bengio’s concerns. One model simulated blackmail to avoid being replaced. Another ignored clear instructions to shut down. These tests were controlled, but Bengio says future systems could be harder to contain. He warned that models capable of assisting in the creation of dangerous bioweapons may arrive as early as next year.

Bengio also questioned OpenAI’s shift toward a for-profit structure. Originally created to ensure AI benefits humanity, OpenAI is now seeking commercial funding under a more traditional model. Critics say this move undermines its original purpose. Bengio believes non-profits are better suited to keep AI development accountable because they are not bound by investor expectations.

🙆🏻‍♀️What else is happening?

👩🏼‍🚒Discover mind-blowing AI tools

  1. Learn How to Use AI - Starting January 8, 2025, we’re launching Workflow Wednesday, a series where we teach you how to use AI effectively. Lock in early bird pricing now and secure your spot. Check it out here

  2. OpenTools AI Tools Expert - Find the perfect AI Tool to supercharge your workflow. This GPT is connected to our database, so you can ask in-depth questions on any AI tool directly in ChatGPT (free)

  3. Penny - An AI shopping assistant that helps users shop smarter and save money

  4. SynthMind - An AI-powered platform that offers various autonomous AI agents to assist professionals in their work

  5. Fornax - A tool that offers instant slide-by-slide feedback on pitch decks for early-stage startup founders

  6. MomentsAI - An AI-guided meditation app that offers personalized meditation sessions

  7. PotionPitch AI - A tool for sales teams that enhances sales outreach and customer engagement

  8. Galadon - An AI-building platform that allows users to create and customize AI software without coding skills

  9. Talently.ai - An AI-powered interviewing tool that conducts live, conversational interviews and provides real-time evaluations

  10. More Graphics Stickers - A service that allows users to generate personalized stickers using AI technology

How likely is it that you would recommend the OpenTools' newsletter to a friend or colleague?


Interested in featuring your services with us? Email us at [email protected]