Two pieces that I read recently which got me thinking about these dynamics are:
1) Full Stack: China's Evolving Industrial Policy for AI (RAND, Kyle Chan et al.)
2) The American DeepSeek Project (Nathan Lambert)
We have arrived at an odd moment in AI: the leading US labs have stopped publishing their research, while open source leadership has been ceded to the Chinese. This is an unexpected development given the stereotypes around both political systems.
Full Stack spends considerable time diving into Beijing's policy goals and its support for the homegrown AI industry across the stack: from chips to energy to talent and data, through to applications. There are clear advantages to a more coordinated effort: pooling compute, data, and talent, and promoting open source model releases so that top research can compound between labs and ultimately diffuse into applications, from public services through military, industrial, and consumer uses.
Chinese officials appear focused less on "winning the AI race" than on reducing key vulnerabilities across the entirety of the stack, working towards self-sufficiency, while pushing aggressively to deploy these emerging capabilities throughout society. Instead of racing for the "God-model," there appears to be far more thought going into broad diffusion.
The first goal, self-sufficiency, is a colossal undertaking: trying to become the first nation on earth to stand up an entire AI supply chain in-house. The biggest liability in the chain is clearly access to compute, specifically leading chips and the EUV machines and fabs needed to manufacture cutting-edge chips at scale.
While coordinated state policy clearly has benefits, the current direction has also led to a marked reduction in FDI, a shriveling of the venture industry, a hit to the capital markets, and a general squeezing out of many private players through explicit state participation. The US may lack the same level of national coordination, but its capital markets more than make up for it, plowing hundreds of billions into individual labs in the race for AGI, even if those efforts remain fragmented.
To me, the US still seems better positioned, largely due to compute, despite the extremely impressive open results released (seemingly weekly) by Chinese labs from DeepSeek to Qwen to Kling to Kimi. In the shift towards post-training, the sheer volume of compute matters immensely. Unlike pre-training, which favors massive centralized runs, post-training lets US companies leverage their dominant share of global data center capacity.
In short, it feels like both sides have something to learn from the other. China may find that many of its long-term goals will prove difficult without robust participation from global private markets. The US, for its part, needs to galvanize open source efforts: capitalist incentives generally make open releases less favorable to near-term profits, but they tend to produce long-term gains as research compounds and diffuses more easily through the rest of the economy.
Both pieces are worth the read.