Huawei creates colossal AI supercomputers, makes thousands of chips think as one.

Huawei links thousands of processors into colossal AI supercomputers, pioneering a "strength in numbers" approach to innovation.

September 25, 2025

In a significant move poised to reshape the landscape of artificial intelligence infrastructure, Huawei has unveiled a groundbreaking strategy to create colossal AI supercomputers by networking thousands of its processors to function as a single, cohesive unit. At its HUAWEI CONNECT 2025 event, the company detailed an ambitious plan centered on a novel architecture that prioritizes massive-scale connectivity to overcome limitations in individual chip performance. This approach, born from a necessity to innovate amidst geopolitical pressures, signals a direct challenge to the current dominance of Western technology in the AI hardware space and outlines a new path for achieving the immense computing power required by next-generation artificial intelligence.
At the heart of Huawei's strategy is a system it calls the "SuperPoD," a design philosophy that shifts the focus from the power of a single processor to the collective strength of a networked whole.[1] The company's executives describe this as creating a "single logical machine" from thousands of separate processing units, enabling them to "learn, think, and reason as one."[1] This is made possible by a proprietary interconnect protocol named UnifiedBus.[2][3] Huawei claims this technology is the key to unlocking unprecedented scale, allowing for the deep interconnection of physical servers so they operate as one logical entity. The technical specifications presented are formidable, with Huawei stating its UnifiedBus technology can link as many as 15,488 of its Ascend AI chips into one system.[2][4][5] The company has made bold claims about the performance of this interconnect, suggesting it is multiple times faster than competing standards from rivals like Nvidia.[2][5] By focusing on the speed and efficiency of communication between chips, Huawei aims to compensate for any performance deficit its individual processors may have compared to the market leaders.[4][6]
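The article does not describe UnifiedBus's actual programming interface, so the following is a minimal, purely illustrative Python sketch of the "single logical machine" idea: many devices are hidden behind one object that shards work and gathers results. All class and function names here are hypothetical, and plain NumPy stands in for real NPUs.

```python
# Illustrative sketch only: UnifiedBus / SuperPoD APIs are not public in this
# article, so simulated in-process "devices" stand in for Ascend NPUs.
import numpy as np


class SimulatedNPU:
    """Stand-in for one NPU; its 'compute' is just a NumPy matmul."""

    def __init__(self, device_id: int):
        self.device_id = device_id

    def matmul(self, a: np.ndarray, b: np.ndarray) -> np.ndarray:
        return a @ b


class LogicalPod:
    """Presents many simulated NPUs as one logical device by sharding work."""

    def __init__(self, num_devices: int):
        self.devices = [SimulatedNPU(i) for i in range(num_devices)]

    def matmul(self, a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Shard the left operand row-wise, one shard per device, then gather
        # the partial results back into a single output the caller sees as one.
        shards = np.array_split(a, len(self.devices), axis=0)
        partials = [dev.matmul(shard, b) for dev, shard in zip(self.devices, shards)]
        return np.concatenate(partials, axis=0)


if __name__ == "__main__":
    pod = LogicalPod(num_devices=8)  # a toy pod, not 15,488 chips
    a = np.random.rand(1024, 512)
    b = np.random.rand(512, 256)
    result = pod.matmul(a, b)        # the caller interacts with one "device"
    assert np.allclose(result, a @ b)
    print("Pod output shape:", result.shape)
```

The point of the sketch is the abstraction boundary: the caller issues one operation against one object, and the pod handles distribution and aggregation internally, which is the behavior Huawei attributes to its interconnect-centric design.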
This networking-first approach extends beyond a single SuperPoD. Huawei also detailed plans for "SuperClusters," which are large-scale computing systems composed of multiple SuperPoDs. The forthcoming Atlas 950 SuperCluster is projected to integrate more than 500,000 Ascend neural processing units (NPUs), with a future Atlas 960 SuperCluster expected to exceed one million NPUs.[7][8] At the building-block level, the Atlas 950 SuperPoD, planned for late 2026, will be powered by 8,192 Ascend NPUs, while the subsequent Atlas 960 SuperPoD, due by the end of 2027, will support up to 15,488 NPUs.[8] This "supernode + cluster" architecture represents Huawei's core bet on system-level design to provide the scalable and sustainable computing power it believes is necessary for the future of AI in China and beyond.[9][10] The company has already seen some adoption, having delivered over 300 of its earlier generation Atlas 900 supernodes to more than 20 customers in various industries.[7]
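A quick back-of-the-envelope calculation shows how the publicly stated figures relate to each other. The derived pod counts below are our own arithmetic from the cited NPU numbers, not a configuration Huawei has published.

```python
# Derive the minimum number of SuperPoDs implied by the stated NPU counts.
import math

systems = {
    # name: (NPUs per SuperPoD, NPU count the SuperCluster is said to exceed)
    "Atlas 950": (8_192, 500_000),
    "Atlas 960": (15_488, 1_000_000),
}

for name, (per_pod, per_cluster) in systems.items():
    pods_needed = math.ceil(per_cluster / per_pod)
    print(f"{name}: at least {pods_needed} SuperPoDs to exceed {per_cluster:,} NPUs")

# Atlas 950: at least 62 SuperPoDs to exceed 500,000 NPUs
# Atlas 960: at least 65 SuperPoDs to exceed 1,000,000 NPUs
```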
Underpinning this large-scale infrastructure is a multi-year roadmap for Huawei's Ascend series of AI processors. The company announced a three-year plan that includes the Ascend 950, 960, and 970 series, with each generation intended to roughly double the computational capacity of its predecessor.[11][9] The roadmap begins with the Ascend 950 series in 2026, which will include variants optimized for different AI workloads like training and inference and will feature Huawei's own high-bandwidth memory.[11][6] The Ascend 960 is slated for 2027, followed by the Ascend 970 in 2028.[2] This public declaration of its chip development plans is a notable shift for the typically secretive company, signaling its confidence and determination to build a competitive and self-reliant AI hardware ecosystem.[12][5] This push for self-sufficiency is a direct response to U.S. export restrictions that have limited its access to advanced semiconductor manufacturing technologies from foreign partners.[13]
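Taken at face value, the "roughly double per generation" claim compounds quickly. The short sketch below extrapolates that claim relative to the 2026 Ascend 950; these are not official performance figures, only the cumulative effect of the stated doubling.

```python
# Extrapolate the stated per-generation doubling relative to the Ascend 950.
baseline_year, baseline_name = 2026, "Ascend 950"
roadmap = [("Ascend 950", 2026), ("Ascend 960", 2027), ("Ascend 970", 2028)]

for name, year in roadmap:
    relative = 2 ** (year - baseline_year)  # 1x, ~2x, ~4x
    print(f"{name} ({year}): ~{relative}x the {baseline_name}'s compute")
```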
The implications of Huawei's strategy are far-reaching for the global AI industry. By leveraging its deep expertise in networking and telecommunications, the company is pioneering an alternative path to AI supercomputing that is less dependent on having the single most powerful chip on the market.[2][6] This "strength in numbers" approach could democratize access to large-scale AI training and inference capabilities, particularly for Chinese companies seeking alternatives to restricted foreign technology.[4][14] While Huawei openly acknowledges that its individual chips lag behind Nvidia's top-tier offerings, it is making a calculated wager that superior system-level design and massive scaling can close the overall performance gap.[4][6] The success of this strategy could intensify competition in the AI hardware market, accelerate innovation within China's tech sector, and potentially lead to a diversification of AI hardware solutions globally.[14] As the demand for AI computation continues to explode, the industry will be watching closely to see if Huawei's vision of a massively interconnected supercomputer can indeed make thousands of chips think, and compute, as one.
