A rather cool thing has been happening lately in the hot world of computer server farms: product manufacturers are serving up dramatic improvements in energy efficiency -- with scarcely an activist, regulator, or other pressure group to claim credit for it.
Except for customers, that is.
The problem, for the uninitiated, is that server farms -- those massive banks of computers that manage traffic for Web sites, email hosts, and company networks, among other services -- are ravenous energy consumers. There are at least nine million servers in the U.S., operating 24/7, providing the capacity that allows businesses, individuals, and governments to store and serve up every type of data and media imaginable, from iTunes to IRS forms.
Servers are growing at an astonishing rate -- not just in numbers, but in speed, demanding ever-greater amounts of power. According to the research firm Gartner, there has been a significant increase in the deployment of high-density servers over the past twelve months, leading to huge power and cooling challenges for data centers. The energy needed for a rack of these high-density servers can be between 10 and 15 times higher than for a traditional server environment.
Here's just one amazing factoid: According to a study last year by Lawrence Berkeley National Lab (download - PDF):
"A single high-powered rack of servers consumes enough energy in a single year to power a hybrid car across the United States 337 times."
That's not all. Additional power is needed to remove the huge quantity of heat generated by these newer machines. If the machines aren't cooled sufficiently, they can shut down, with potentially devastating consequences for affected businesses, agencies, or other organizations. All told, server farms consume many times more energy than office facilities of equivalent size.
For those who operate server farms, this has become a nontrivial issue. While energy costs represent less than 10% of a typical company's information technology (IT) budget, that could rise to more than 50% in the next few years, says Gartner. For companies like Google, whose massive computing infrastructure, by one estimate, gives it "the largest utility bill on the planet," the push to make servers run cooler and more efficiently has taken on added urgency. According to Google engineer Luiz André Barroso, writing in the September issue of the Association for Computing Machinery's Queue:
The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.
So the IT world is stepping up, with manufacturers of chips (AMD, Intel) and servers (Dell, HP, IBM, Sun) competing as feverishly on energy efficiency as they do on speed and other performance characteristics. (Click the preceding links to see each company's take on energy issues.)
They're collaborating, too. "We’re looking for other companies, especially ones that have been leaders, to start to share what they know," Dave Douglas, VP of Eco Responsibility at Sun Microsystems, told me recently. He said Sun plans to post its own energy use per building on the Internet. "It may or may not be useful to lots of people but I get lots of questions, ‘What is a reasonable amount of greenhouse gas emissions for office employees in a certain locality?' That kind of data is really hard to find today. If we start to have everybody sharing their best practices, share where they’re at, share which projects are working and which ones aren’t, there’s a lot of value to be found in all that information. There’s a very strong parallel to the open-source community."
Electric utilities, for which the gluttonous energy consumption of servers threatens to strain power plants and the grid, are getting into the act. Last week, Pacific Gas & Electric in California announced the first-ever utility financial incentive program to support "virtualization projects" in data centers. Virtualization allows multiple applications to run concurrently on a single physical machine, enabling customers to consolidate their data centers and remove a large portion of their existing servers. Qualifying PG&E customers can earn a rebate of up to $4 million per project site, based on the amount of energy savings achieved. In addition to the rebate, PG&E customers can expect to save $300 to $600 in annual energy costs for each server removed. Those savings nearly double when reduced data center cooling costs are taken into account. (PG&E may also be the first utility to set up a dedicated Web page focusing on the needs of high-tech companies.)
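To get a feel for the scale of those numbers, here's a back-of-the-envelope sketch of the savings math using the figures cited above: $300 to $600 in annual energy costs per server removed, roughly doubling once reduced cooling costs are counted. The function name, the 2x cooling multiplier, and the example fleet size are my own illustrative assumptions, not PG&E's actual rebate formula.

```python
# Rough estimate of annual savings from server consolidation, based on
# the per-server figures cited in the article. The cooling multiplier
# of 2.0 approximates "savings nearly double" and is an assumption.

def annual_savings(servers_removed, per_server_low=300, per_server_high=600,
                   cooling_multiplier=2.0):
    """Return a (low, high) range of estimated annual savings in dollars."""
    low = servers_removed * per_server_low * cooling_multiplier
    high = servers_removed * per_server_high * cooling_multiplier
    return low, high

# Hypothetical example: retiring 500 physical servers via virtualization.
low, high = annual_savings(500)
print(f"Estimated annual savings: ${low:,.0f} to ${high:,.0f}")
```

Even at the low end, a data center that retires a few hundred machines is looking at hundreds of thousands of dollars a year -- which is why customers, not regulators, are driving this.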
It should be noted that the politicians haven't been entirely removed from the picture. Last July, the U.S. House of Representatives approved a bill that calls for a six-month U.S. EPA study on data center efficiency. (The bill was referred to the Senate Committee on Energy and Natural Resources, where it sits awaiting approval from that body.) The specter of congressional scrutiny led an industry consortium called Standard Performance Evaluation Corporation in May to establish a set of benchmarks for servers. The consortium -- whose members include HP, IBM, and Sun -- is hoping the benchmarks will allow them to establish uniform energy-efficiency goals.
Governmental interest notwithstanding, all of the technological progress to date has come through voluntary action on the part of chip and server manufacturers, the result of healthy competition spurred by pressure from customers to solve a burning (and costly) problem.
It serves as an exemplary model of how industry players, simultaneously innovating, competing, and cooperating, can create profitable environmental solutions themselves -- a model those in other sectors would do well to copy.