We have spent a lot of time on energy efficiency, and it may be worthwhile to share our experiences with the community in order to exchange best practices.
It is very important to think through your choice of hardware carefully. In particular, the energy loss in a server's power supply unit is concerning. Google claims in a white paper that you are likely to lose 30 – 40% of power in a typical power supply unit. The reason is that the power supply unit converts AC to DC power, which generates heat (i.e. a loss of energy). Google, however, has developed servers which lose only 10% of the power.
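To make the difference concrete, here is a small back-of-the-envelope sketch. The wattage and efficiency figures are hypothetical examples chosen to match the percentages above, not measurements of any specific hardware:

```python
# Illustrative sketch: energy wasted as heat in a server's power supply unit
# (PSU). The wall-power draw and efficiency figures below are assumed
# examples, not measurements of any specific server.

def psu_waste(wall_power_w: float, efficiency: float) -> float:
    """Watts dissipated as heat in the PSU for a given draw from the wall."""
    return wall_power_w * (1.0 - efficiency)

HOURS_PER_YEAR = 24 * 365

wall_power = 400.0  # watts drawn from the wall (assumed example)

for eff in (0.65, 0.90):  # ~35% loss vs. the ~10% loss Google reports
    waste_w = psu_waste(wall_power, eff)
    waste_kwh = waste_w * HOURS_PER_YEAR / 1000.0
    print(f"efficiency {eff:.0%}: {waste_w:.0f} W wasted, "
          f"{waste_kwh:.0f} kWh per server per year")
```

Multiplied across thousands of servers running around the clock, the gap between a 65% and a 90% efficient power supply is anything but a rounding error.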
Choosing hardware with efficient power supply units is obviously of paramount importance. But where do you find these efficient power supplies? The answer is simple: you cannot. Humans have built nuclear reactors and sent people to the moon, but efficient power supply units are still not available, except for the likes of Google.
It is quite sad that, despite all their green slogans, hardware manufacturers have still done so little to really reduce global carbon emissions. Another example adds to my argument: operating temperatures. If you read any spec sheet, most servers support ambient temperatures of 10 – 35 degrees Celsius. This range is so wide that you have to ask yourself why datacenters are still running inlet temperatures of 21 degrees Celsius. Therefore, the easiest way to improve power efficiency and help out mother earth is to raise the temperature in the datacenter. Yet I do not hear hardware manufacturers advocating higher temperatures. I just wonder why.
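The savings from a warmer datacenter show up in the facility's PUE (Power Usage Effectiveness, defined as total facility energy divided by IT equipment energy). The sketch below uses the standard PUE formula, but all the load figures are assumed examples for illustration, not measurements from any real facility:

```python
# Illustrative sketch: how reduced cooling load shows up in PUE
# (Power Usage Effectiveness = total facility energy / IT equipment energy).
# All kW figures below are assumed examples, not measured values.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE for a facility given IT load, cooling load, and other overhead."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it_load = 1000.0  # kW of IT equipment (assumed)
other = 100.0     # kW for lighting, power distribution, etc. (assumed)

# Hypothetical scenario: warmer inlet air lets the cooling plant work less.
cooling_cold = 500.0  # kW of cooling at a 21 degree inlet (assumed)
cooling_warm = 350.0  # kW of cooling at a warmer inlet (assumed)

print(f"PUE at 21 degree inlet: {pue(it_load, cooling_cold, other):.2f}")
print(f"PUE at warmer inlet:    {pue(it_load, cooling_warm, other):.2f}")
print(f"Facility power saved:   {cooling_cold - cooling_warm:.0f} kW")
```

Every kilowatt the cooling plant no longer burns is a kilowatt of carbon-free savings that requires no new hardware at all, which is exactly why the silence from the manufacturers is so puzzling.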