Schneider Electric: Rethinking Data Center Cooling for AI – The Rise of Direct-to-Chip Liquid Cooling

January 27, 2026
How Direct-to-Chip Liquid Cooling Supports AI Workloads While Enhancing Sustainability & Performance
As artificial intelligence (AI) and high-performance computing (HPC) continue to push the limits of processing power, traditional approaches to data center cooling must evolve to manage the substantial heat generated by increasingly powerful GPUs and CPUs.
One of the most effective emerging solutions is direct-to-chip liquid cooling, which supports AI workload cooling by delivering efficient heat management while enhancing sustainability and performance.
Why Traditional Air Cooling Is No Longer Enough for HPC Infrastructure
Traditional air-cooling systems are struggling to keep up with the heat generated by today's AI-driven workloads and are reaching their limits as GPU heat density increases. Even at higher fan speeds, air lacks the thermal capacity to dissipate heat efficiently, leading to hot spots, thermal throttling, and increased failure risk in AI and HPC environments.
In addition, air cooling uses significant electricity as fans run continuously to manage rising GPU heat. Data centers already consume an estimated 2% of the world’s electricity, a figure expected to increase as AI adoption grows. A different approach is needed for HPC infrastructure cooling.
Why Liquid Cooling Is Crucial for AI Data Centers
Liquid cooling is a critical solution: up to 3,000 times more effective at carrying heat than air, it enables higher compute density while reducing energy usage. By directly absorbing and dissipating heat from the hottest components, such as GPUs, liquid cooling provides a better approach to AI workload cooling. It keeps chips at optimal operating temperatures, enhancing performance and reliability.
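As a rough sanity check on that figure, the sketch below compares the volumetric heat capacity of water and air. The property values are typical textbook figures near room temperature, not numbers from this article.

```python
# Back-of-the-envelope comparison of how much heat a given volume of
# water vs. air can carry per degree of temperature rise.
# Property values are typical figures near room temperature (assumed).

WATER_DENSITY_KG_M3 = 997.0        # kg/m^3
WATER_SPECIFIC_HEAT_J_KG_K = 4186  # J/(kg*K)
AIR_DENSITY_KG_M3 = 1.18           # kg/m^3
AIR_SPECIFIC_HEAT_J_KG_K = 1005    # J/(kg*K)

water_volumetric = WATER_DENSITY_KG_M3 * WATER_SPECIFIC_HEAT_J_KG_K  # J/(m^3*K)
air_volumetric = AIR_DENSITY_KG_M3 * AIR_SPECIFIC_HEAT_J_KG_K        # J/(m^3*K)

ratio = water_volumetric / air_volumetric
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

The ratio comes out on the order of a few thousand, consistent with the "up to 3,000 times" comparison above.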
What Is Direct-to-Chip Cooling in AI & HPC?
Direct-to-chip cooling is highly efficient for the hardware that runs generative AI models: coolant, typically water, flows directly across the hottest components, such as CPUs and GPUs. The coolant absorbs heat via cold plates attached to these chips and transfers it to a heat exchange system outside the data center. This approach minimizes the need for large fans, reducing energy consumption and freeing up valuable space for higher computing density.
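To illustrate the energy balance behind a cold plate loop, here is a minimal sizing sketch. The 700 W chip heat load and 10 K coolant temperature rise are illustrative assumptions, not values from the article or any specific product.

```python
# Minimal energy-balance sketch for a single cold plate:
#   Q = m_dot * c_p * delta_T  ->  m_dot = Q / (c_p * delta_T)
# The chip heat load and allowed coolant temperature rise are
# illustrative assumptions, not values from the article.

CHIP_HEAT_LOAD_W = 700.0        # heat to remove from one GPU (assumed)
COOLANT_SPECIFIC_HEAT = 4186.0  # water, J/(kg*K)
COOLANT_TEMP_RISE_K = 10.0      # inlet-to-outlet rise across the cold plate (assumed)

mass_flow_kg_s = CHIP_HEAT_LOAD_W / (COOLANT_SPECIFIC_HEAT * COOLANT_TEMP_RISE_K)
flow_l_min = mass_flow_kg_s * 60.0  # water is roughly 1 kg per litre

print(f"~{mass_flow_kg_s:.3f} kg/s, about {flow_l_min:.1f} L/min per GPU")
```

Under these assumptions, roughly one litre of water per minute is enough to carry away the heat of a single high-power GPU, which is why liquid loops can be so compact compared with air handling.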
Why Single-Phase Direct-to-Chip Cooling Is the Preferred Choice
In single-phase cooling, the coolant remains in its liquid state throughout the process, ensuring stable and predictable thermal transfer. Unlike two-phase cooling, where the liquid turns to vapor, single-phase systems offer simpler maintenance and higher reliability.
This makes single-phase cooling the go-to method for many data centers, especially those running AI and HPC workloads.
HPC Thermal Solutions That Enable Direct-to-Chip Cooling
Successfully implementing direct-to-chip cooling relies on a set of integrated HPC thermal solutions designed to manage the extreme heat generated by AI and high-performance computing workloads. Together, these technologies form a complete liquid-cooling ecosystem that delivers reliable, high-density data center cooling.
- Coolant Distribution Units (CDUs): CDUs control the temperature and flow of the coolant, ensuring it reaches the servers under the right conditions. They are essential for single-phase cooling systems, helping balance the entire liquid-cooling infrastructure.
- In-Rack Manifolds: These manifolds distribute the coolant throughout the rack, connecting to each cold plate with leak-proof, color-coded quick disconnects for easy maintenance. They are crucial for maintaining stable cooling across high-density computing setups.
- Cold Plates: Mounted directly on CPUs and GPUs, cold plates draw heat away from these components more effectively than traditional heat sinks. Their high-conductivity materials and direct contact with the hottest chips provide the precise heat removal required for AI workload cooling and the high power densities of modern AI and HPC environments.
- Rear Door Heat Exchangers: Positioned at the rear of server racks, these exchangers use coolant to capture residual heat from server exhaust air, complementing the cold plates and ensuring efficient cooling across the data center. A simple heat-budget sketch after this list shows how these components can share a rack's load.
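As a rough illustration of how these components work together, the sketch below assumes a hypothetical 80 kW rack in which cold plates capture 80% of the heat into the CDU loop and the rear door heat exchanger handles the rest. All figures are assumptions for the sketch, not specifications from the article or any Schneider Electric product.

```python
# Illustrative rack-level heat budget showing how the components above can
# share the load. All figures are assumptions for this sketch, not
# specifications from the article or any Schneider Electric product.

RACK_POWER_KW = 80.0            # total IT load in the rack (assumed)
LIQUID_CAPTURE_FRACTION = 0.8   # share removed by cold plates via the CDU loop (assumed)
COOLANT_SPECIFIC_HEAT = 4186.0  # water, J/(kg*K)
COOLANT_TEMP_RISE_K = 10.0      # CDU supply-to-return temperature rise (assumed)

rack_power_w = RACK_POWER_KW * 1000.0
liquid_load_w = rack_power_w * LIQUID_CAPTURE_FRACTION
residual_air_load_w = rack_power_w - liquid_load_w  # left for the rear door heat exchanger

cdu_flow_l_min = liquid_load_w / (COOLANT_SPECIFIC_HEAT * COOLANT_TEMP_RISE_K) * 60.0

print(f"CDU / cold-plate loop: {liquid_load_w / 1000:.0f} kW -> ~{cdu_flow_l_min:.0f} L/min of water")
print(f"Rear door heat exchanger: ~{residual_air_load_w / 1000:.0f} kW of residual air-side heat")
```

The same balance applies at any scale: the CDU and manifolds must deliver enough flow for the heat the cold plates capture, while the rear door heat exchanger covers whatever still leaves the rack as warm air.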
The Future of AI & HPC Infrastructure Cooling: A Shift Toward Direct-to-Chip Solutions

As the demand for AI and HPC infrastructure continues to grow, the move from air cooling to liquid cooling, specifically direct-to-chip systems, will become increasingly important for AI workload cooling. This technology delivers the performance data center cooling requires while significantly reducing energy consumption and CO2 emissions.
Organizations aiming to future-proof their AI and HPC infrastructure should consider adopting direct-to-chip cooling solutions to meet growing computational demands while staying energy-efficient and sustainable. To learn more, access the executive report “Optimizing AI Infrastructure: The Critical Role of Liquid Cooling” to see how a comprehensive, system-level approach integrates liquid cooling with existing data center infrastructure to support high-density AI workloads.
For more information on Schneider Electric solutions, click HERE.