Discussion about this post

Ruben

The other day I asked Qwen if they provide an offline version, like Claude.

The answer was yes. I asked how much I would have to pay to use it ...

Zero was the answer. I just felt ripped off by my $20 a month for Claude... after coming from a slow and annoying ChatGPT ...

I'm starting to move to where the democratisation of AI is happening: China 😏

Why pay US billionaires to use their tools when I can get the same for free 🤷‍♂️ from a country where I lived for 4 years, and which I witnessed as really comfortable for most people, particularly the young.

Claude? ChatGPT?

Better to skip them and train straight on Qwen, Zhipu, Moonshot, DeepSeek, or any of the other Chinese ones 🤗

Why get a Tesla when a BYD offers more for much less headache?

Leon Liao

AI competition is shifting from a contest over model capability into a broader competition over energy, capital, market access, and institutional organization.

China’s low-cost models are effectively putting a ceiling on global token pricing. U.S. AI companies may not collapse immediately, but the valuation framework could shift from a 15–30x ARR SaaS multiple to a 3–8x commoditized-compute multiple. This is the most striking judgment I took from @chinarb’s piece.

One correction I would add: it is true that American developers are increasingly using Chinese models while U.S. models have almost no access to the Chinese market. But the reason is not simply that U.S. models are being kept out by China’s regulatory firewall. It is also that Anthropic, Google, and OpenAI have chosen to restrict or close services to users in mainland China. That is deeply unfortunate.

China’s advantage is not merely that its models are cheaper. Its advantage lies in its ability to organize low-cost energy, cloud infrastructure, open-source ecosystems, application markets, and patient capital into a low-cost inference system.

The United States has an open market, but also a structure shaped by capital return requirements, high electricity prices, elevated valuations, and the pass-through of infrastructure costs to households. China has a more closed domestic market, brutal internal competition, lower electricity costs, longer capital patience, and a stronger willingness to export low-priced AI services outward. Silicon Valley sits in between: it still leads at the frontier of model capability, but it is increasingly constrained by U.S. capital markets and the physical cost structure of the American power system.

This is exactly where my argument in “The Great Partition of Global AI” becomes more relevant. The global AI system is not splitting into two fully isolated blocs, but it is being partitioned layer by layer. The top layer of frontier models, high-end chips, national-security applications, and sensitive data will become increasingly securitized. The middle layer of open-source models, enterprise tools, application frameworks, and developer workflows will remain partially connected. The bottom layer of inference, deployment, cloud routing, hardware adaptation, and cost optimization will become the real battlefield. John Steinbach’s $281 electricity bill is not just a local utility story. It is a symptom of how AI partition is moving from chips and models into power grids, capital costs, cloud pricing, and household bills.

This also explains why China’s AI position cannot be understood only through the lens of model benchmarks. China may still lag the United States at the very frontier of closed proprietary models, but it is building a different kind of advantage in the deployment layer: cheaper power, brutal domestic competition, open-weight model diffusion, cloud ecosystem bundling, hardware adaptation around Ascend and domestic chips, and a massive internal market where models are forced to become cheaper and more usable. In my framework, this is the lower and middle layer of the global AI partition. The U.S. may still dominate the frontier layer, but China is trying to shape the cost structure and deployment logic of the global inference layer. That is why the next phase of AI competition will not be decided only by who has the best model, but by who can turn intelligence into a low-cost, high-volume, globally routable utility.
