Cost Model: Data Centres



Introduction

Nowadays, the world runs on data. With 4.66 billion active internet users globally, each generating an average of 1.7MB of new data per second, data centres are among the fastest-growing and most important components of the global economy. As more of the world’s population comes online, and with the advance of smart technologies such as AI, the internet of things, cryptocurrencies, and autonomous vehicles, this growth will only accelerate.


By 2025 data usage is projected to have increased tenfold from 2018 levels. To cope, more data storage space is needed. This is leading to a boom in data centre new-build schemes, with construction set to expand by nearly 10% per annum between 2018 and 2025.


Demand for new data centre construction is primarily being driven by the global move to cloud computing and storage. This has created a need for the hyperscale data centre as owned and developed by the likes of AWS, Microsoft, and Google. Colocation facilities continue to provide flexible amounts of data centre white space to tenant clients, as well as supplying these cloud giants with facilities on a wholesale basis.


Enterprise (owner-operated) data centres still exist, but a high proportion of development is within the hyperscale market as most companies and individuals use this for computing and storage.


Data centres provide a secure and resilient operating environment for IT equipment, and are used to access, store and process data. Although essentially industrial buildings, data centres are highly functional, and their design requirements are changing quickly – not least when it comes to meeting strict carbon reduction requirements and providing adequate cybersecurity.


Design considerations

The data centre industry is under pressure to bring resilient, efficient, and secure projects to market quickly. In addition, clients increasingly expect site selection, acquisition, design, and construction to be completed as energy-efficiently and innovatively as possible. When designing a data centre, it is critical that the following three elements are considered from the outset:


1. Scalability: The flexibility to accommodate change without major works to the physical or IT infrastructure is key to strong data centre design. The need to refresh IT servers, often on a three-year cycle, introduces a design requirement for high-frequency equipment change. As a data centre site tends to operate for at least two decades, it must be designed to handle the significant technology upgrades and updates that will occur many times throughout its operational life.


2. Resilience: A data centre’s critical load comprises all the hardware components that make up the IT business architecture. Any new design must build in sufficient resilience to ensure that component failures or required maintenance activities do not compromise the critical load.

Typically, resilience is measured and defined by the data centre’s tier rating, a standardised, industry-accepted mechanism devised by the Uptime Institute. The system ranges from tier 1 to tier 4: a tier 1 project has a simple design with minimal backup equipment and a single path for power and cooling, whereas a tier 4 data centre is designed to be fully fault-tolerant, with redundancy for every component.
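The practical difference between tiers is easiest to see as permitted downtime. A minimal sketch, using the availability figures commonly cited for the Uptime Institute tiers (these percentages are industry rules of thumb, not taken from this article):

```python
# Illustrative annual downtime implied by the commonly cited
# availability target for each Uptime Institute tier rating.
HOURS_PER_YEAR = 8_760

tier_availability = {
    1: 0.99671,  # single path, minimal backup
    2: 0.99741,  # redundant components
    3: 0.99982,  # concurrently maintainable
    4: 0.99995,  # fully fault-tolerant
}

for tier, availability in tier_availability.items():
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"Tier {tier}: {availability:.3%} availability "
          f"= about {downtime_h:.1f} h downtime per year")
```

The jump from tier 2 to tier 3 cuts tolerated downtime from roughly a day to under two hours a year, which is what drives the step change in plant redundancy and cost.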


3. Connectivity: Selecting systems that use cloud-based, mobile-friendly data centre infrastructure management (DCIM) software, which can be shared and managed remotely, allows users to leverage information from beyond their own data centre, enabling benefits such as predictive maintenance. This helps to maximise the life of components while ensuring data centre availability and performance, as components such as batteries can be replaced before they fail.

In addition, there has been a significant increase in the use of AI to monitor and manage power and cooling at data centre sites. This information can be used for detailed reporting and analysis and to improve energy efficiency. However, depending on the level of integration and the extent of the information that is being generated and managed, incorporating this kind of software can come at a significant cost.

With these three considerations in mind, the key features of a data centre can be designed.


The question of cooling

Data centres are characterised by very high power and heat load density: current typical data centre loads range from 1,500W/m2 to 3,000W/m2. Keeping data centres cool is therefore one of the most critical elements of their design.
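To put those densities in context, a quick sketch of the heat a cooling plant must reject. The 1,000 m² hall area is a hypothetical example figure, not from the article:

```python
# Rough heat-load estimate from the power densities quoted above.
# Hall area is a hypothetical example; densities are the article's range.
hall_area_m2 = 1_000
density_low_w_m2, density_high_w_m2 = 1_500, 3_000

# Essentially all electrical power drawn by IT equipment ends up as heat.
load_low_kw = hall_area_m2 * density_low_w_m2 / 1_000
load_high_kw = hall_area_m2 * density_high_w_m2 / 1_000
print(f"Cooling plant must reject roughly "
      f"{load_low_kw:.0f}-{load_high_kw:.0f} kW continuously")
```

Even a modest hall therefore demands megawatt-scale heat rejection around the clock, which is why cooling strategy dominates both design and operating cost.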


Over the years, server equipment has become more resilient and can now tolerate a greater range in temperature and humidity levels than older technology allowed. Nevertheless, in older, larger and less efficient data centres, as much as 40% of energy is still used simply to keep servers cool.
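That 40% figure maps onto the industry's standard efficiency metric, power usage effectiveness (PUE), defined as total facility energy divided by IT equipment energy. A minimal sketch, assuming (for illustration only) that everything other than cooling is IT load:

```python
# PUE = total facility energy / IT equipment energy.
# Assumption for illustration: cooling is the only overhead, so the
# remaining fraction is all IT load (real sites also have lighting,
# power-distribution losses, etc., which would push PUE higher).
cooling_fraction = 0.40
it_fraction = 1 - cooling_fraction

pue = 1 / it_fraction
print(f"Implied PUE of at least {pue:.2f}")
```

On that simplification, a legacy site spending 40% of its energy on cooling has a PUE of at least about 1.67, whereas modern hyperscale facilities commonly report figures much closer to 1.1.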


There are many factors that drive the selection of any cooling option, not least capital and lifecycle costs, but also the location of the data centre and the feasibility of incorporating innovations. Understanding the cost drivers and benefits of each is crucial to advising clients effectively.


Solutions include passive cooling, which uses natural ventilation to remove heat from the building, and immersive liquid cooling, where servers are immersed in a rack filled with coolant that can have more than 1,000 times the heat capacity of air. The coolant absorbs the heat from the servers and is then removed from the rack.


In the northern hemisphere, air-cooling solutions favour free cooling, which works by using air from outside combined with reclaimed heat in the winter, and evaporative cooling in the summer, to provide a total cooling solution throughout the year.


Water-based cooling options are another approach: these cool equipment by pumping cold water through pipes or plates. Water-cooled systems work well but have an inherent risk of leaks.


Many operators are now integrating AI and machine learning into their cooling systems. The AI learns to apply the optimum amount of cooling at any given time, driving up efficiency and reducing energy use.


Location helps, too. A number of global cloud providers have located their data centres offshore, using seawater to cool their facilities, while Microsoft’s Project Natick is experimenting with siting data centres on the seabed to keep temperatures down. Data centres can also simply be built in colder climates – such as Iceland, where BMW’s data is stored – or the Baltic region, which hosts a number of the world’s global hyperscale data centre facilities.


Another approach, which is generally required by local authorities in the Baltic region, is to recycle waste heat and use it for district heating schemes. The Condorcet data centre in Paris transports its waste heat directly into the nearby Climate Change Arboretum, where scientists are studying how high temperatures affect plants. In Switzerland, the heat from an IBM data centre warms a nearby swimming pool.


As data centre growth continues, finding innovative ways to use data centre heat for nearby homes and businesses is an important way that centres can be better integrated into communities and contribute to wider decarbonisation efforts.


Resilience and availability

Resilience is at the forefront of good data centre design: it is the primary function of a data centre and its associated infrastructure, with the goal of maintaining operational uptime, avoiding unplanned downtime, and providing the capacity to support digital services. Multiple systems must be considered when delivering resilience in a data centre. These include:

  • Security facilities: as well as the physical security installations that monitor and control access within and around the facility, there must be provision for cybersecurity, an ever-increasing concern for data centre operators as the scale and nature of data security threats grow.

  • Physical separation is required to minimise risks associated with the location of equipment and distribution paths. Fire separation may need to be included between redundant equipment, and power and cooling distribution may have to run in different fire zones.

  • Power supplies: high-voltage (HV) requirements can vary, with some facilities connecting to the primary grid for a more reliable supply; some may require two separate HV supplies, each rated to the maximum load. A fully diversified supply is a higher-cost solution.

  • Standby plant: where possible, data centres use mains power as the primary power source (ideally generated from local renewable sources), with standby power from on-site generation. A single generator, “N”, is usually adequate for tier 1 data centres. Tier 2 facilities tend to need several generators, or N+1, depending on load and rating. Higher tier ratings demand redundant plant to meet concurrent-maintenance and fault-tolerance criteria.

  • Central plant (chillers, UPS and generators): varying levels of backup plant are used to offset the risks of component failure or maintenance cover. Data centres designed to tier 3 and below may use an N+1 redundancy strategy, whereby one unit of plant is configured to provide overall standby capacity for the system. The cost of N+1 resilience is influenced by the degree of modularity in the design. For tier 4 data centres, systems must be fault-tolerant, which generally results in fully redundant plant and distribution.

  • Distribution paths (pipework/ductwork and cabling/switchboards): power supplies and cooling may be required on a diverse basis, which avoids single points of failure while also providing capacity for maintenance and replacement. Cost impacts stem from the point at which diversity is provided. Tier 3 and 4 facilities require duplicate system paths feeding down to practical unit level.

  • Final distribution: power supplies to the racks holding computer equipment are the final links in the chain. Individual equipment racks may or may not have two power supplies.

  • Water supply can be a key factor in system resilience, particularly where water-based cooling solutions are used.
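The N and N+1 strategies described in the bullets above reduce to simple arithmetic: provide enough duty units to carry the critical load, then add standby units. A hypothetical sizing sketch (the 5 MW load and 2 MW generator rating are illustrative figures, not from the article):

```python
import math

def generators_needed(critical_load_kw: float, unit_rating_kw: float,
                      redundancy: int = 0) -> int:
    """Duty units to carry the load (N), plus `redundancy` standby units.

    redundancy=0 gives a plain N scheme; redundancy=1 gives N+1.
    """
    n = math.ceil(critical_load_kw / unit_rating_kw)
    return n + redundancy

# Hypothetical example: 5 MW critical load, 2 MW generator sets.
print(generators_needed(5_000, 2_000))                # N:   3 duty units
print(generators_needed(5_000, 2_000, redundancy=1))  # N+1: 3 duty + 1 standby
```

This is why modularity matters to the cost of N+1 resilience: with larger, fewer units, the single standby unit represents a bigger fraction of the total installed capacity.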

Cloud-based resilience is an increasingly important method of maintaining uptime. Cloud and internet organisations are now managing resilience at both the IT and infrastructure level; they can mirror data across multiple sites, and handle resilience within the IT platform. Consequently they are able to use multiple facilities with individually lower resilience, reducing cost and often providing an overall greater level of resilience.

The higher the resilience requirements, the greater the capital and running cost implications of a data centre project. For example, a tier 4 data centre will be 50% larger and three times more expensive per unit of processing space than an equivalent tier 2 data centre.


Upgrading existing data centres

As data centres near the end of their planned operational life, they can suffer from an increased risk of failure and reduced efficiency. Yet they can also represent an excellent opportunity. An upgrade or refurbishment of an existing facility can cost significantly less than a new-build, and can be achieved in a far shorter time frame, since there is no need for planning approvals or additional utility connections. Upgrade schemes also have significantly reduced embodied carbon compared with a new-build project.


To achieve net zero emissions, it is not enough to establish a carbon-neutral power supply. The owners and operators of a data centre must quantify and measure all greenhouse gas emissions during 24/7 operation, as well as the embodied carbon used in the production and transportation of capital goods built, used and demolished during the data centre’s life. The embodied carbon of an upgraded brownfield data centre is therefore likely to be far lower than that of a brand-new site requiring a carbon-intensive construction phase.


Upgrading legacy installations is an effective way to increase capacity without space and carbon footprint increases. It also offers the opportunity to design in crucial long-term benefits, such as strengthening competitiveness, reliability, safety, flexibility and environmental integration, as well as security and monitoring.


Refurbishing a data centre makes it better equipped to minimise downtime and respond to incidents, through careful redesign of the redundancy and resilience of power supplies and critical mechanical systems.


Using advanced technologies for cooling and heat recovery, modernised data centres are also better able to integrate into their community environments. A key component of redesigning the centre will be studying and analysing factors such as air flow, heat propagation, audible noise, and electromagnetic compatibility.


Other areas to consider in extending the life of an ageing facility include:

  • elevating the data centre operating temperature;

  • upgrading servers and systems;

  • improving the system layout and rack layout for power and cooling efficiency;

  • considering supplemental or alternative cooling schemes;

  • addressing availability and reliability issues in power distribution; and

  • reviewing the availability of data centre power, including the potential for alternative power sources.


Crucially, as data becomes increasingly important to customers, its appeal as a target to criminal or terrorist organisations grows. Any refurbishment or upgrade project must invest in bringing cybersecurity up to current standards, incorporating globally integrated monitoring and security systems, both electronic and physical, to defend against all kinds of attacks and respond to incidents if they occur.


Current challenges, trends – and the road ahead

As the data centre sector has matured, its needs and demands have become clearer. End users are now better placed to choose between construction of their own data centre or engaging with a third-party provider, whether through colocation, managed or cloud services, or engaging with the upgrade of an existing centre.


These decisions are underpinned by a better understanding of the total cost and time frame of ownership, which typically covers a period of five to 10 years. Institutions and funds are therefore now able to invest with greater certainty, as unknown costs can be minimised with the right level of management capability. However, this also means that data centres are becoming more commoditised. Competition is fierce as third-party and outsourcing organisations strive to lower costs and differentiate themselves from the competition.


Instability in materials and skills supply is an ongoing issue. The construction market is experiencing considerable volatility and inflationary spikes in materials prices, including for metals and timber. Data centre projects, being services-intensive, are particularly exposed to increased steel and copper prices. In the short term at least, the sector will need to monitor inflationary trends and the wider supply market closely.


There are many hurdles for data centres to overcome in the coming decades, not least meeting the net zero carbon challenge, protecting data from malicious attacks and preventable leaks, and addressing geographical and geopolitical challenges. The challenges facing the wider technology industry are also complex: maintaining talent, minimising supply chain disruption, and implementing sustainable, environmentally sound solutions.


For those designing data centres, security, decarbonisation and innovation are the watchwords to ensure that projects are fit for purpose in this extremely fast-moving sector.


Sources:

RICS

ResearchGate

Building UK

Statista

ScienceDirect

PBC