AI's Power Hunger Forges New Data Center Era in Asia Pacific

Asia Pacific's data centers are rapidly transforming, adopting liquid cooling and green solutions to meet AI's insatiable power demands.

September 30, 2025

The rapid proliferation of artificial intelligence across the Asia Pacific region is forcing a fundamental reckoning within the data center industry. As companies from finance to manufacturing integrate AI to enhance operations, the digital infrastructure underpinning these advancements is being pushed to its limits. Traditional data centers, designed for a previous era of computing, are proving inadequate for the immense power and cooling requirements of modern AI systems, which rely on dense clusters of high-performance GPUs. This shift, as highlighted by digital infrastructure provider Vertiv, is compelling operators to move beyond incremental upgrades and fundamentally rethink how data centers are designed, built, and operated to meet the voracious demands of AI workloads.
At the heart of the challenge is a dramatic increase in power density. Legacy data centers were typically built to support rack power densities in the 5 to 15 kilowatt range.[1][2][3] However, the specialized processors essential for AI model training and inference generate far more heat and draw far more power.[1] GPU-based servers for AI can require 45 to 55 kW per rack, with some projections pushing future densities toward one megawatt per rack.[3][4] This leap in power consumption strains every aspect of a traditional facility's infrastructure. Conventional power distribution, including wiring and plugs, is often not rated for the electrical currents and higher temperatures associated with AI server cabinets.[2] Furthermore, the sheer weight of modern AI hardware can exceed the floor loading capacity of older buildings, requiring structural assessments and reinforcement.[2] This mismatch between the capabilities of existing facilities and the requirements of new technology has created a significant gap between supply and demand for AI-ready data centers across the Asia Pacific region, even as overall capacity is expected to double.[5]
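The scale of that density gap can be illustrated with some back-of-the-envelope arithmetic. The figures below are simply the ranges cited above, not measurements from any specific facility:

```python
# Back-of-the-envelope comparison of legacy vs. AI rack power density.
# Figures are the ranges cited in the article; real facilities vary widely.

LEGACY_RACK_KW = (5, 15)   # typical legacy rack design range, kW
AI_RACK_KW = (45, 55)      # GPU-based AI server rack range, kW
FUTURE_RACK_KW = 1_000     # projected future densities, kW (1 MW per rack)

def density_multiple(ai_kw: float, legacy_kw: float) -> float:
    """How many legacy racks' worth of power one AI rack draws."""
    return ai_kw / legacy_kw

# Midpoint comparison: a 50 kW AI rack vs. a 10 kW legacy rack.
print(density_multiple(50, 10))               # 5x the power (and heat) per rack
print(density_multiple(FUTURE_RACK_KW, 10))   # 100x for a projected 1 MW rack
```

In other words, a single mid-range AI rack dissipates as much heat as five legacy racks, which is why facility-level power distribution and cooling, not just the racks themselves, must be redesigned.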
In response to this escalating pressure, the industry is pursuing a dual strategy of retrofitting existing sites and constructing new, purpose-built facilities. For many operators, retrofitting legacy data centers offers a path to support AI, though it presents considerable challenges.[1][6] This process involves extensive and costly upgrades, from installing high-voltage busways and advanced power distribution units to reinforcing floors.[1] However, the most critical adaptation is the move away from traditional air cooling. The longstanding "cold aisle/hot aisle" layouts, which rely on circulating chilled air, are stretched past their limits by the intense heat generated by GPU clusters.[1] While retrofits can be suitable for smaller-scale AI deployments, many organizations, particularly hyperscalers, are concluding that incremental changes are not enough.[4][6] This has given rise to the concept of the "AI factory"—a data center designed from the ground up to support high-density, AI-optimized infrastructure, treating intelligence as a manufactured product.[4][7] These new builds integrate power and cooling systems designed for the unique demands of AI from the outset, representing a more scalable and sustainable long-term solution.[4]
A cornerstone of this infrastructure evolution is the widespread adoption of liquid cooling. Because liquid is far more effective at heat transfer than air, it has become an essential technology for managing the thermal output of dense AI workloads.[8][9] The liquid cooling market in the Asia-Pacific region is experiencing robust growth as a result, with a projected compound annual growth rate exceeding 26%.[10][11][12] Data center operators are deploying a range of liquid cooling solutions. Direct-to-chip cooling, one of the most prevalent methods, circulates coolant through cold plates attached directly to processors to draw heat from the hottest components.[9] Another approach, immersion cooling, involves submerging entire servers in a thermally conductive dielectric fluid.[9][13] For facilities managing mixed-density environments or seeking less disruptive upgrades, rear-door heat exchangers can be installed on the back of server racks to cool air as it exits the equipment.[8][9] This transition to liquid cooling is not merely a technical upgrade; it is reshaping the physical layout of data centers, requiring new floor plans and sophisticated coolant distribution systems.[4]
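Why liquid is so much more effective can be seen from the steady-state heat balance Q = ṁ·c_p·ΔT. The sketch below uses standard textbook properties of air and water with an assumed 10 K coolant temperature rise; it is a simplified illustration, not a design calculation for any real facility:

```python
# Coolant flow needed to remove a 50 kW rack's heat load, from
# Q = mdot * c_p * delta_T. Simplified steady-state sketch with
# textbook fluid properties and an assumed 10 K temperature rise.

Q_W = 50_000        # heat load of one AI rack, watts
DELTA_T = 10.0      # coolant temperature rise, kelvin

CP_AIR = 1005.0     # specific heat of air, J/(kg*K)
RHO_AIR = 1.2       # density of air, kg/m^3
CP_WATER = 4186.0   # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0  # density of water, kg/m^3

def volume_flow_m3s(q_w: float, cp: float, rho: float, dt: float) -> float:
    """Volumetric coolant flow (m^3/s) needed to carry away q_w watts."""
    return q_w / (cp * dt) / rho

air_flow = volume_flow_m3s(Q_W, CP_AIR, RHO_AIR, DELTA_T)
water_flow = volume_flow_m3s(Q_W, CP_WATER, RHO_WATER, DELTA_T)
print(f"air:   {air_flow:.2f} m^3/s")         # roughly 4 m^3/s of air
print(f"water: {water_flow * 1000:.2f} L/s")  # about 1 L/s of water
```

Moving roughly four cubic meters of air per second through a single rack is impractical at scale, while about a liter of water per second is a modest pipe flow, which is the basic physics behind the shift to direct-to-chip and immersion cooling.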
The immense energy consumption required to power these next-generation data centers has profound implications for sustainability and regional power grids.[14] The International Energy Agency projects that global electricity demand from data centers could more than double by 2030, with AI being the most significant driver of this increase.[15] This surge puts immense pressure on national grids, many of which in the Asia-Pacific region still rely heavily on fossil fuels.[14][16] Consequently, the push to adapt data centers for AI is intertwined with the urgent need for a transition to clean energy. In response, governments and industry leaders are promoting the development of green data centers that integrate renewable energy sources.[14][16] Innovative solutions like waste heat recovery, where excess heat from data centers is redirected to warm nearby buildings, and smart grid integration are also gaining traction.[14] Ultimately, the AI boom is catalyzing a necessary evolution in the Asia-Pacific data center landscape, accelerating the shift toward more powerful, efficient, and sustainable digital infrastructure capable of powering the future of the region's economy.
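The IEA projection above implies a steep compound growth rate. Taking "more than double by 2030" literally over a five-year window (the window is an assumption for illustration, not a figure from the report):

```python
# Implied annual growth rate if data-center electricity demand doubles
# over five years. The five-year window is an illustrative assumption;
# the cited IEA projection says only "more than double by 2030".

YEARS = 5
MULTIPLE = 2.0

cagr = MULTIPLE ** (1 / YEARS) - 1
print(f"{cagr:.1%}")  # about 14.9% per year
```

A demand curve compounding at roughly 15% per year far outpaces the rate at which most regional grids add generation capacity, which is why the clean-energy transition and the AI buildout are so tightly coupled.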
