One of the major problems when operating a datacenter is cooling your server rooms as efficiently as possible. Servers are growing faster, bigger, and more power-hungry than ever, producing more heat in your racks. Improving the cooling systems reduces costs, may give your datacenter more capacity (more servers with the same cooling capacity), and is good for the environment (and don’t tell me you don’t want to be green). In this article I will discuss the solutions EvoSwitch (Leaseweb’s housing facility) has implemented to improve the quality of cooling and reduce costs.
Before looking into possible solutions, you have to know how servers are cooled. In the old days, cooling was not a big issue, as the boxes were big and power consumption was low. Cooling was done by placing a fan somewhere in the machine. If you were lucky, you had multiple fans (intake and exhaust). Where they were placed was up to the manufacturer, resulting in different solutions. Without an industry standard, it was possible that one server blew its hot air into another server.
Over the past 10 years, horizontal cooling became the industry standard. Almost every server now uses horizontal cooling: cold-air intake at the front and hot-air exhaust at the back of the server.
The traditional method of cooling in datacenters is based on vertical airflow. Cold air enters the server rack at the bottom via the raised floor and leaves the rack at the top. As the air flows to the top of the rack, its temperature increases while passing the servers, so servers at the top of the rack are cooled with the warmest air. The best place for your servers: the bottom of the rack.
An improved method is based on the same methodology, but with horizontal airflow. Cold air is blown through perforated tiles into the aisle in front of the rack, where the servers draw it in. Fans in the server expel the hot air at the back. This is the most widely used cooling solution in datacenters.
Most solutions are based on this principle of horizontal airflow. In very high-density environments, datacenters sometimes use liquid (water) cooling at the back of the rack to lower the exhaust temperature.
In our experience with other datacenters, we often saw that the whole server room was cooled, even the back of the server racks. Mixing cold and hot air is very inefficient: there is no need for cold air at the back of your server. As pointed out before, servers use horizontal cooling, meaning they need cold air at the front. In these traditional datacenters, hot air could also flow from the back of the rack to the front, so instead of drawing in cold air, servers could end up with hot air. An unwanted situation for your servers.
To stop this short-circuiting of cold and hot air in the server rooms, EvoSwitch decided to create so-called Cold Corridors: cold aisles that are fully isolated from the rest of the server room, and the only part that is actively cooled. EvoSwitch uses blind plates to fill unused rack space, preventing hot air from flowing to the front. To close each cold corridor, they placed a roof on every aisle and closed off the sides with sliding doors.
Inside the cold corridor
Outside the cold corridor
When creating cold corridors it is also possible to raise the room temperature, as long as the cold corridor stays cold. In this case, the room temperature is around 28-30 degrees Celsius, while the cold corridor temperature is around 22-24 degrees Celsius. It is possible to raise the hot-aisle temperature even more, but I doubt our customers would like to work in a tropical environment. By raising the hot-aisle temperature (in this case the server room temperature) the coolers work more efficiently: up to a point, the larger the delta T (the difference between the air handler’s inlet and outlet temperature), the better the efficiency. This (simple) solution resulted in a direct efficiency improvement of approximately 15%.
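The delta-T effect can be illustrated with a back-of-the-envelope calculation. All numbers below are illustrative assumptions, not EvoSwitch measurements: at a fixed airflow, the heat an air handler removes scales linearly with delta T, so a wider delta T means more cooling for the same fan energy.

```python
# Sketch (illustrative numbers): heat removed by an airflow is
#   Q = mass_flow * c_p * delta_T
# With a fixed fan airflow, raising the return (hot-aisle) temperature
# widens delta T and removes more heat for the same fan energy.

AIR_DENSITY = 1.2   # kg/m^3, approximate at room conditions
CP_AIR = 1005.0     # J/(kg*K), specific heat of air

def heat_removed_kw(airflow_m3_s: float, delta_t: float) -> float:
    """Heat removed (kW) by an airflow with temperature rise delta_t (K)."""
    mass_flow = airflow_m3_s * AIR_DENSITY        # kg/s
    return mass_flow * CP_AIR * delta_t / 1000.0  # kW

# Same 5 m^3/s airflow, two different hot-aisle temperatures:
before = heat_removed_kw(5.0, 28 - 22)  # hot aisle at 28 C, supply at 22 C
after = heat_removed_kw(5.0, 30 - 22)   # hot aisle raised to 30 C

print(f"delta T 6 K: {before:.1f} kW")   # → 36.2 kW
print(f"delta T 8 K: {after:.1f} kW")    # → 48.2 kW
print(f"capacity gain: {after / before - 1:.0%}")  # → 33%
```

The airflow and temperatures are made up for the example; the point is only that the capacity gain is proportional to the delta-T increase.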
To further improve the cooling efficiency, they also looked at the airflow under the raised floor. In most traditional datacenters, structured cabling runs below the raised floor, and you have to put extra energy into the airflow to bypass every obstacle. To minimize these effects on the airflow, EvoSwitch does its structured cabling above the racks. Besides no longer affecting the airflow, another positive aspect arises: being able to perform maintenance on the cabling without opening up your entire floor :)
To further reduce costs, EvoSwitch looked into ways to optimize the cooling systems. In most situations, compressors are used to chill water, which in turn is used to cool the air. EvoSwitch uses free coolers instead. In the Netherlands, the ambient temperature is below 15 degrees Celsius for 50% of the year; using free coolers during those hours, instead of running the compressors to chill the water, realised significant energy savings.
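A rough sketch of why this matters: given an hourly ambient temperature profile, you can count the hours in which free cooling can replace the chillers. The threshold comes from the article’s "below 15 degrees for 50% of the year" figure; the temperature profile and compressor power draw below are hypothetical.

```python
# Illustrative sketch (hypothetical numbers, not EvoSwitch measurements):
# estimate the compressor runtime that free coolers can take over.

FREE_COOLING_THRESHOLD_C = 15.0   # free coolers usable below this ambient temp
HOURS_PER_YEAR = 8760

def compressor_hours_saved(hourly_temps: list) -> int:
    """Count the hours in which free cooling can replace the chillers."""
    return sum(1 for t in hourly_temps if t < FREE_COOLING_THRESHOLD_C)

# Toy temperature profile: half the year below 15 C, matching the article's
# claim about the Dutch climate.
temps = [10.0] * (HOURS_PER_YEAR // 2) + [20.0] * (HOURS_PER_YEAR // 2)

saved = compressor_hours_saved(temps)
chiller_power_kw = 200.0  # assumed compressor draw while chilling
print(f"free-cooling hours: {saved} ({saved / HOURS_PER_YEAR:.0%} of the year)")
print(f"compressor energy avoided: {saved * chiller_power_kw / 1000:.0f} MWh")
```

With a real deployment you would feed in measured hourly temperatures and the actual chiller power curve; the structure of the estimate stays the same.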
By combining cold corridors with free coolers, EvoSwitch increased cooling efficiency while reducing energy costs.