In October the Silicon Valley Leadership Group held a “Chill Off” event, the purpose of which was to compare the performance of different approaches to cooling the data center. The event was focused, energetic and insightful, and the cooling approaches it compared will deliver real gains for the data center. But as the industry tries to address the sorts of power concerns the EPA described in its report to Congress in mid-2007, the types of changes discussed at the Chill Off may be limited in their effectiveness.
When customers approach us about data center power and cooling issues, we find that they face a number of typical situations: concern about the size of their utility bill; tactical issues with their power and/or cooling architectures; and resource limitations imposed by local utilities. These issues are serious, but there’s one issue that galvanizes organizations to make changes to their data centers: the growing need for IT services and products versus the ability to meet that need by deploying IT assets.
A company faces difficult choices when it discovers that its current inventory of facilities, power and cooling will not meet its increasing demands for compute power. The need to resolve this can place stress on the organization, causing divides between IT services staff, IT resource staff, and critical facility infrastructure staff.
However, there are solutions. A number of organizations are providing help with power and cooling issues, and a substantial body of work has been developed around the effects of power and cooling decisions on data center resource consumption.
The efficient data center is not just about power distribution efficiency or cooling architecture performance coefficients; it’s also about efficient utilization of IT assets and the amount of useful work a data center produces from the resources that it consumes. We’ve known for years that over-provisioning power and cooling drives down the efficiency of the data center. It should come as no surprise to anyone that over-provisioning of compute resources for applications has the same effect on overall data center productivity.
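The drag that over-provisioned compute puts on productivity can be sketched with a simple work-per-watt calculation. All figures below (idle and peak server power, server counts, utilization levels) are hypothetical assumptions for illustration, not measured data; the key behavior is that an idle server still draws a large fraction of its peak power.

```python
# Illustrative sketch (all numbers are hypothetical assumptions):
# idle servers still draw much of their peak power, so an
# over-provisioned fleet produces less useful work per kW.

def facility_work_per_kw(servers, utilization, idle_watts=200,
                         peak_watts=350, work_per_busy_server=1.0):
    """Useful work produced per kW of IT power drawn.

    Assumes power draw scales linearly from idle to peak
    with utilization.
    """
    watts_per_server = idle_watts + (peak_watts - idle_watts) * utilization
    total_kw = servers * watts_per_server / 1000.0
    useful_work = servers * utilization * work_per_busy_server
    return useful_work / total_kw

# A right-sized deployment at 60% utilization...
right_sized = facility_work_per_kw(servers=100, utilization=0.60)
# ...versus an over-provisioned fleet doing the same total work at 15%.
over_provisioned = facility_work_per_kw(servers=400, utilization=0.15)

print(round(right_sized, 2))       # work units per kW
print(round(over_provisioned, 2))  # markedly lower per kW
```

Both deployments complete the same amount of useful work, but the over-provisioned fleet draws roughly three times the power to do it, which is exactly the productivity loss described above.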
The finger-pointing starts when IT blames Facilities for driving these issues, when in fact it can be IT’s own policies around application engineering, equipment deployment and equipment operation that have the greater effect on a facility’s server capacity. Before IT places the responsibility for remediation on Facilities, it must put its own house in order.
The two most powerful weapons in the IT group’s arsenal for this particular challenge are virtualization and hardware refresh. Virtualization enables organizations to start managing their IT assets for utilization. Hardware refresh, when coupled with strategies for managing IT utilization, ensures that organizations are leveraging the improvements in performance per Watt that we at Dell embody in every succeeding generation of our products.
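The combined effect of the two approaches can be sketched as back-of-envelope arithmetic. The consolidation ratio and server power figures below are illustrative assumptions, not Dell product specifications: virtualization reduces the server count, and each remaining refreshed server delivers more work for the power it draws.

```python
import math

# Hypothetical sketch: consolidate lightly utilized legacy servers
# onto fewer, newer virtualization hosts. All figures are
# illustrative assumptions, not vendor data.

def consolidation_power_kw(legacy_servers=1000,
                           legacy_watts=350,      # assumed legacy draw
                           consolidation_ratio=8, # assumed VMs per host
                           new_server_watts=450): # assumed host draw
    """Return (legacy_kw, consolidated_kw) IT power for the fleet."""
    new_servers = math.ceil(legacy_servers / consolidation_ratio)
    legacy_kw = legacy_servers * legacy_watts / 1000.0
    consolidated_kw = new_servers * new_server_watts / 1000.0
    return legacy_kw, consolidated_kw

legacy_kw, new_kw = consolidation_power_kw()
print(legacy_kw, new_kw)  # the consolidated fleet draws far less power
```

Even though each new host draws more power than a legacy server, the fleet-level reduction in server count dominates, which is why the two levers work best together.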
Events such as the Chill Off are great: they bring folks together to address key issues. But they need to stretch even further to help our industry address resource consumption issues. We need to focus greater attention on the IT side of the house and do more to question our potentially old-school and faulty mindset. Combining these kinds of events with a greater emphasis on IT adaptations like virtualization and hardware refresh is also vital to bringing the industry together on issues of utilization and productivity, and to building the body of knowledge needed to guarantee sustainable computing for the 21st century.
By John Pflueger, Technology Strategist, Dell Inc.