Demand for data centres is accelerating rapidly as artificial intelligence, cloud computing and digital services continue to expand. This growth is placing increasing pressure on energy systems and reshaping how data centre infrastructure is designed, powered and cooled.
According to the International Energy Agency (IEA), global data centre electricity consumption reached around 460 terawatt-hours (TWh) in 2022, representing roughly 2% of global electricity demand. As AI workloads and hyperscale infrastructure expand, this could rise to between 620 and 1,050 TWh by 2026.
In the UK, the impact of this growth is already being felt. National Grid has identified digital infrastructure, including data centres, as one of the fastest-growing sources of electricity demand, particularly around London and the South-East where major facilities are clustered.
Developers therefore face growing pressure to deliver facilities that are not only resilient and scalable but also energy efficient and compatible with constrained electricity networks.
For many years the industry has relied heavily on Power Usage Effectiveness (PUE) as the primary indicator of efficiency. While PUE has helped drive improvements in infrastructure performance, the scale and complexity of modern facilities mean that energy strategy must now go beyond a single metric.
Why PUE alone is no longer enough
Power Usage Effectiveness (PUE) was introduced by the Green Grid consortium to measure the efficiency of data centre infrastructure. The metric compares the total power entering a facility with the energy consumed directly by IT equipment.
Over the past decade, the focus on PUE has helped improve infrastructure efficiency significantly. Many hyperscale facilities now operate with PUE values close to 1.2, compared with much higher levels in earlier generations of data centres.
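As a simple illustration, PUE can be computed directly from metered facility and IT loads. The figures below are illustrative, not from any specific facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every kilowatt entering the facility
    reaches the IT equipment; real facilities are always above 1.0.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative figures: a 10 MW facility whose IT equipment draws 8 MW
print(round(pue(10_000, 8_000), 2))  # 1.25
```

The lower the ratio, the less energy is spent on cooling, power distribution and other overheads relative to useful computing.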
However, PUE only measures infrastructure efficiency. It does not reflect the carbon intensity of electricity supply, the broader environmental impact of operations or the pressure large facilities place on energy networks.
As a result, PUE is increasingly recognised as one element of a wider energy strategy rather than the sole measure of performance.
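Pairing PUE with the carbon intensity of the electricity supply shows why infrastructure efficiency alone is not the whole picture: two facilities with identical PUE can have very different emissions. A minimal sketch, using illustrative loads and grid intensities rather than real project data:

```python
def annual_emissions_tonnes(it_load_mw: float, pue: float,
                            grid_gco2_per_kwh: float) -> float:
    """Estimate annual operational CO2 from IT load, PUE and grid carbon intensity."""
    hours_per_year = 8760
    total_mwh = it_load_mw * pue * hours_per_year  # total facility energy
    return total_mwh * 1000 * grid_gco2_per_kwh / 1e6  # grams -> tonnes

# Same 5 MW IT load and a PUE of 1.2 on two illustrative grids
low_carbon = annual_emissions_tonnes(5, 1.2, 50)    # e.g. a hydro-rich grid
high_carbon = annual_emissions_tonnes(5, 1.2, 400)  # e.g. a fossil-heavy grid
print(round(low_carbon), round(high_carbon))  # 2628 21024
```

An identical facility emits roughly eight times more on the carbon-intensive grid, which is why location and power procurement matter as much as infrastructure efficiency.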
AI is reshaping energy demand
The rapid growth of artificial intelligence is one of the most significant drivers of energy demand within the data centre sector.
Training and operating large AI models requires extremely high levels of processing power and significantly higher server densities, increasing both electrical demand and cooling requirements. The IEA has warned that electricity demand linked to AI-related computing could grow rapidly as organisations deploy increasingly complex models.
At the same time, hyperscale data centre capacity has more than doubled globally since 2017 as companies expand digital infrastructure to support growing demand.
For project teams delivering new facilities, energy strategy must consider not only current operational requirements but also the long-term evolution of computing technologies and infrastructure needs.
Cooling innovation is becoming central to efficiency
Cooling systems remain one of the largest contributors to energy consumption in data centres. Research from the Uptime Institute's Global Data Center Survey suggests cooling infrastructure typically accounts for 30–40% of total facility energy use.
As computing densities increase, traditional air-cooling approaches are becoming more difficult to maintain efficiently, particularly in environments supporting AI and high-performance computing.
This is driving interest in alternative cooling technologies such as:
- Direct-to-chip liquid cooling
- Rear-door heat exchangers
- Immersion cooling
- Hybrid air and liquid cooling strategies
These technologies allow facilities to operate at higher computing densities while maintaining reliability and improving energy performance.
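The effect of more efficient cooling on PUE can be estimated with simple arithmetic. Assuming cooling accounts for roughly 35% of total facility energy (within the 30–40% range cited above) and that a liquid-cooling retrofit halves cooling energy, both of which are illustrative assumptions rather than measured outcomes:

```python
# Illustrative energy breakdown for a 10 MW facility
it_kw = 5_500       # IT equipment load
cooling_kw = 3_500  # cooling at ~35% of total facility energy
other_kw = 1_000    # power distribution losses, lighting, etc.

pue_before = (it_kw + cooling_kw + other_kw) / it_kw
pue_after = (it_kw + cooling_kw * 0.5 + other_kw) / it_kw  # halve cooling energy
print(round(pue_before, 2), round(pue_after, 2))  # 1.82 1.5
```

Even under these rough assumptions, the cooling strategy alone moves PUE substantially, which is why it has become a first-order design decision.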
However, adopting these approaches often requires changes to building services design. Cooling architecture, plant configuration and redundancy strategies all influence the long-term efficiency and adaptability of a facility.
For project teams designing new facilities, cooling strategy is becoming a key design decision. Early input from building services engineers can help ensure infrastructure is designed with sufficient flexibility to accommodate future computing densities and evolving cooling technologies.
Waste heat recovery and circular energy systems
Another area attracting increasing attention is waste heat recovery.
Data centres generate large amounts of heat during normal operation. In some locations this heat can be captured and reused within district heating networks or nearby commercial and residential buildings.
Across parts of Europe, particularly Scandinavia and the Netherlands, data centre heat is already being integrated into district heating systems. In the UK, similar opportunities are beginning to emerge as part of wider efforts to improve energy efficiency and support low-carbon heating strategies.
However, heat reuse depends on several factors including temperature levels, proximity to heat demand and infrastructure availability. As a result, it is not always straightforward to implement in practice.
Grid capacity is becoming a critical constraint
Beyond operational efficiency, access to electricity supply is becoming a major challenge for many developers.
Large data centre developments require substantial grid connections, and in parts of the UK connection capacity is already under pressure. National Grid's Future Energy Scenarios report highlights that digital infrastructure will play an increasingly significant role in future electricity demand.
Developers are therefore exploring strategies such as on-site energy generation, battery storage and renewable power purchase agreements. Understanding how a data centre interacts with the wider energy system is becoming an essential element of project planning.
Designing energy strategy from the outset
Given these pressures, energy strategy must be considered from the earliest stages of design rather than as a late-stage engineering exercise.
Key considerations increasingly include:
- anticipating future computing densities and IT loads
- designing cooling systems that can adapt as technologies evolve
- understanding grid capacity and connection constraints
- integrating energy resilience and redundancy strategies
- evaluating opportunities for heat recovery
Facilities designed without sufficient flexibility may require expensive upgrades as computing requirements evolve.
One challenge increasingly encountered on new data centre projects is designing infrastructure that can accommodate rapidly changing computing densities. Facilities designed around today’s server loads may struggle as rack densities increase or new cooling technologies are introduced. Retrofitting cooling or electrical infrastructure after a facility is operational can be complex and costly, making flexibility in plant and power systems an important consideration.
Looking beyond a single metric
The rapid growth of digital infrastructure means energy strategy will remain central to data centre development.
While PUE remains a useful indicator of infrastructure efficiency, modern facilities require a broader approach that considers resilience, cooling innovation, grid capacity and long-term sustainability.
Addressing these challenges early in the design process, with input from experienced building services engineers, can help ensure that data centres remain efficient, adaptable and capable of supporting future technological demands.
At Green Building Design, we provide building services engineering expertise to help shape robust energy strategies, optimise cooling and electrical infrastructure, and design flexible systems that support long-term performance. For more information on how we can support your project, contact us today.





