Stanford, China, and the Next AI Phase
Why direction matters more than speed

Reading time: 5 minutes
🗞️ In this edition
Xi Jinping says China will win with AI
Stanford experts predict AI in 2026
Tsinghua University on China’s scientific direction
In other AI news –
Microsoft invests $17.5B to scale AI across India
Why agentic AI looks great and then breaks
A new foundation for agentic AI
4 must-try AI tools
January doesn’t start with noise.
It starts with signals.
Some are spoken out loud, in speeches and predictions. Others appear quietly, in funding decisions, research papers, and new foundations that hint at what comes next.
This edition looks at those signals.
How governments, universities, and institutions are positioning for AI in 2026.
And what that tells us about where real momentum is building.
This is not about hype.
It’s about direction.
Xi Jinping says China will win with AI
What's happening:
In his 2026 New Year address, Xi Jinping talked about AI in a way that shows exactly how China sees it.
AI wasn't mentioned as a side note. It was placed right next to space missions, military projects, and major infrastructure plans.
He talked about Chinese AI models competing to be the best, and he connected this directly to China making its own computer chips.
He wasn't talking about AI like a new tech product. He was talking about it like a source of national power, the same way countries talk about their military or energy systems.
Grouping AI with major national projects like these is what governments do when they are building for long-term strength and independence.
This wasn't a product announcement. It was a statement about where China is putting its focus for the next ten years and beyond, framed in the language of national modernization and new economic strength. That is how governments talk when they're planning for decades, not just a few years.
Why this is important:
This shows how seriously China's government is taking AI at the highest level.
When a national leader connects AI with major national goals in a big speech like this, the whole approach changes. It stops being about tech companies and startups. It becomes about organizing everything around one direction.
China's new Five-Year Plan starts in 2026. This is how China gets universities, companies, and local governments all working toward the same goals.
Xi said the country needs to connect science and technology deeply with industries. That means money, people, and policies will all move in the same direction.
This changes what competition looks like. It's not just about better technology anymore. It's about whether entire countries can get all their systems moving together toward the same goals over many years.
China is aligning its universities with what companies need, and both with what the government funds. This isn't about quick wins. It's about building momentum that lasts.
Countries that treat AI like roads or electricity don't just think about profits. They think about control, independence, and staying ahead for decades.
When Xi connected AI with China making its own chips, he was making a point about not depending on anyone else. The US has restricted sales of advanced chips to China. Instead of slowing down, China made building its own chips a top priority.
This isn't just about China making good AI. It's about China being able to keep making good AI without needing parts or knowledge from other countries.
Companies plan for the next few years. Governments can plan for what things look like in 2035 or 2040.
Different time horizons mean different choices. Longer timelines mean more willingness to invest in things that won't pay off for a decade.
Stanford experts predict AI in 2026
What's happening:
Stanford experts from different fields came together to predict what AI will look like in 2026.
The main theme across all their predictions is clear. The hype era is ending. The evaluation era is beginning.
After years of big promises and huge investments, 2026 is when people will start asking harder questions. Not "what can AI do?" but "how well does it actually work?"
Experts predict we'll see standardized tests for AI in legal work, real-time tracking of how AI affects jobs, and better ways to judge whether medical AI tools actually help patients.
Countries will focus more on AI independence. They want to build their own systems or at least control where their data goes and who has access to it.
There will be more focus on smaller, better-trained models instead of just making everything bigger. People are running out of good training data anyway.
The predictions also point to failures. Companies will admit that AI hasn't increased productivity in most areas yet. Projects will fail. People will learn what works and what doesn't.
Why this is important:
This matters because it signals a major shift in how seriously AI is being treated.
The focus is moving from excitement to measurement. From promises to proof. From potential to actual results.
When experts across medicine, law, economics, and computer science all say the same thing, it means something fundamental is changing. The question isn't whether AI is useful anymore. The question is how useful, for what, and at what cost.
Countries wanting AI independence changes the whole game. It's not just about who builds the best technology. It's about who controls it, where the data stays, and who benefits from it.
This push for independence will drive massive investments in data centers around the world. But there's a limit to how much money can be tied up in one thing. Some experts think this looks like a bubble.
The shift to evaluation also means accountability. When hospitals get flooded with AI startups all promising solutions, they need ways to judge what actually works. Does it help patients? Does it make staff more efficient? Does it create new problems?
The same applies to legal AI. Law firms will stop asking if AI can write and start asking how accurate it is, what risks it creates, and whether it actually saves time and money.
The emphasis on smaller, better models matters too. Bigger isn't always better. If you can get good results with less data and less computing power, you save money and energy. You also make AI more accessible to organizations that can't afford massive systems.
The acknowledgment of failure is actually healthy. When companies admit AI hasn't delivered productivity gains yet, it forces everyone to be more realistic. It shifts focus to the specific areas where AI actually works, like programming and call centers, instead of pretending it works everywhere.
Comments from the editor:
What stands out is the honesty.
These aren't people trying to sell AI. They're researchers trying to understand it. And what they're saying is that we need to slow down and measure what's actually happening.
The theme of evaluation over evangelism is important. For years, the conversation was dominated by what AI might do someday. Now it's shifting to what AI can prove it does today.
The predictions about AI sovereignty are particularly interesting. Countries don't want to be dependent on American tech companies for critical infrastructure. They're willing to spend billions to avoid that dependence.
This creates a more complex global AI landscape. Instead of a few companies dominating everything, you'll have different countries with different systems, different rules, and different priorities.
The focus on opening AI's black box in science is crucial. In fields like medicine, you can't just trust a prediction. You need to understand why the AI made that prediction. Scientists are working on ways to see inside these systems and understand how they reach conclusions.
The prediction about failed AI projects is refreshingly realistic. Not everything will work. Companies will waste money. People will get frustrated. But that's how progress actually happens. You learn from failure.
What's also notable is the shift from general-purpose AI to specific applications. The experts aren't talking about artificial general intelligence. They're talking about AI that does particular tasks well. That's a more mature, practical approach.
The real test in 2026 will be whether organizations can move from hype to implementation. Whether they can figure out where AI actually adds value and where it doesn't. Whether they can measure impact honestly instead of just believing promises.
The advantage will go to whoever can evaluate AI systems accurately and deploy them strategically. Not to whoever has the biggest model or the most compute power.
Because in the end, technology only matters if it actually solves real problems for real people. And 2026 might be the year we finally start measuring whether AI does that or not.
Tsinghua University on China's scientific direction
What's happening:
Tsinghua University just released its first complete set of rules for how AI should be used in education across the entire campus.
This isn't a ban. It's a framework. A guide for students, teachers, and researchers on what's allowed, what's not, and how to use AI responsibly.
The guidelines cover everything from classroom teaching to research papers to graduate dissertations. They set clear boundaries while still encouraging innovation.
The core message is simple. AI is a tool, not a replacement. Teachers and students remain responsible for their work. AI can help, but it can't do the work for you.
The rules took two years to develop. The university interviewed over a hundred students and teachers from different fields. They studied guidelines from 25 other universities around the world.
Five main principles guide everything: teachers and students are responsible for their work; all use must follow the rules and maintain integrity; no sensitive data can be fed to AI systems; users must think critically rather than trust AI blindly; and the system must be fair and accessible to everyone.
For teaching, instructors decide how AI can be used in each course and must explain the rules clearly. Students can use AI as a helper but cannot copy AI outputs and submit them as their own work.
For research, graduate students cannot use AI to replace their own thinking and learning. Using AI to write papers for you or fake results is strictly forbidden.
Why this is important:
This matters because it's one of the first major universities to create comprehensive AI rules that cover everything, not just one area.
Most universities are still figuring out what to do about AI in education. Some ban it. Some ignore it. Tsinghua is trying to find a middle path.
The approach is "proactive yet prudent." Embrace the technology but set clear boundaries. This is important because it shows a mature response to a new challenge.
The focus on responsibility is key. The guidelines don't let people hide behind "the AI did it." If you use AI in your work, you're still responsible for what gets produced.
The emphasis on disclosure matters too. You have to say when you used AI. This creates transparency and prevents people from pretending AI generated work is entirely their own.
The warning about mental inertia is particularly important. When you rely too much on AI, you stop thinking for yourself. Your brain gets lazy. The guidelines specifically call this out as a risk.
The ban on using sensitive or classified data with AI systems addresses a real security concern. Many AI systems send data to external servers. That creates risks when the data is private or protected.
The point about algorithmic bias and the digital divide recognizes that AI isn't neutral. It can amplify existing inequalities if not managed carefully. Making sure everyone has access and that the systems are fair is part of responsible use.
What makes this significant is that Tsinghua isn't trying to stop AI use. They're trying to guide it. They want students and teachers to explore AI's potential while avoiding its dangers.
The framework is also designed to grow. As AI technology changes, the rules can adapt. This isn't meant to be a fixed document that gets outdated quickly.
In other AI news –
Microsoft invests $17.5B to scale AI across India – Microsoft is betting that the next AI leap will come from reaching hundreds of millions of people, not just advanced markets.
Why agentic AI looks great and then breaks – A Stanford and Harvard paper explains why many AI agents impress in demos but fail in real, complex environments.
A new foundation for agentic AI – The Linux Foundation creates a neutral space to build shared standards for AI agents, signaling a move toward coordination over competition.
4 must-try AI tools
Rizemail - An AI-powered email summarization tool that helps users get to the core of their unread newsletters and long email threads
Kastro Chat - An AI-powered chatbot platform that allows businesses to create their own chatbots without any coding knowledge
Verbalate - A video translation and lip sync software designed to help businesses reach a global audience
Taranify - A platform that uses AI technology to provide mood-based recommendations for music, Netflix shows, and books
AI is entering 2026 with confidence.
But also with pressure.
The easy phase is over. The questions are harder now. About reliability, coordination, and long-term direction.
What will matter is not who talks the loudest, but who builds systems that hold up over time.
That is the lens we bring to OpenTools.
If one idea in this edition helped you see 2026 more clearly, reply and tell us. We read every message.
The OpenTools Team
Interested in featuring your services with us? Email us at [email protected]


